In this document, we analyze the objective and subjective features acquired from three hand-controllers for device and surgeon performance assessment. The studied hand-controllers were the microsurgery-specific Excalibur3DS-I (“Excalibur”) and two general-purpose haptic devices: the High Definition Phantom Premium 3.0 (“Premium”) and the sigma.7 (“Sigma7”). Thirty participants, recruited from among the experienced surgeons, residents, and fellows of the Department of Clinical Neurosciences at the University of Calgary, performed a peg-in-hole task consisting of cotton strip micromanipulation in a limited workspace, designed to reflect the maneuvers performed during microsurgery. The multimedia supplementary files for the task and device structure are available at https://github.com/AmirBGitHub/smart-handcontroller/.

The reader can reveal any code chunk by clicking on its Code button. We hid the code by default because we (a) wanted to improve the readability of this document and (b) assumed that readers will not be interested in reading every code chunk.


1 Managing and Visualizing Data

The snippet below documents the R libraries used in this research. For convenience, we used the pacman package, which installs and loads the needed libraries in one step.

rm(list = ls()) # clear global environment
graphics.off() # close all graphics
library(pacman) # needs to be installed first
p_load(
  R.matlab,
  plotly,
  extrafont,
  grDevices,
  gridExtra,
  stats,
  qlcMatrix,
  dplyr,
  stringr,
  tidyverse,
  utils,
  reshape2,
  ggplot2,
  devtools,
  knitr,
  kableExtra,
  data.table,
  RColorBrewer,
  readxl,
  abind,
  fmsb,
  chron,
  jmv,
  factoextra,
  imputeTS,
  ISLR,
  yarrr,
  caret,
  e1071,
  psych,
  nnet,
  lmtest,
  pscl
  )

In the snippet below, we extract the “.mat” files generated by the hand-controllers’ MATLAB software.


1.1 Extracting and Managing the Underlying Survey Features

In this section, the survey data were extracted and put into an R dataframe. The data include training status (Training Status); years of experience as a surgeon (Year Level Category), expressed in four levels of 1-3, 4-6, 7-9, and 10+ years; and age range (Age Range Category), expressed in four levels of 23-30, 31-45, 46-60, and >61 years old. Binary data on prior experience with a hand-controller (Prior Experience) and on experience playing video games (Video Game Experience) were also collected through the survey questions.
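The binary answers (Prior Experience, Video Game Experience) are encoded as 0/1 in the chunks below through R's factor coding. A minimal sketch of that `as.numeric(factor(x)) - 1` idiom, with hypothetical responses:

```r
# Hypothetical yes/no responses encoded to 0/1 with the
# as.numeric(factor(x)) - 1 idiom used in the chunks below;
# factor() sorts levels alphabetically, so "No" < "Yes".
responses <- c("No", "Yes", "Yes", "No")
as.numeric(factor(responses)) - 1  # 0 1 1 0
```

Note that this encoding depends on the alphabetical ordering of the observed levels, which happens to map "No" to 0 and "Yes" to 1.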

The questionnaire consisted of 21 questions, labeled A1-A21, with A1-A4 having multiple-choice answers and A5-A21 using a 0-100 selection scale in increments of 10. The questions are as follows:

A1) How was the hand-controller’s workspace for surgery?

A2) How was the physical size of the hand-controller?

A3) How was the hand-controller’s weight?

A4) How many times were you unable to orient the tool in your desired direction?

A5) How was the smoothness of motion for the hand-controller?

A6) How was the smoothness of positioning in 3D space?

A7) How was the smoothness of orienting in 3D space?

A8) How comfortable was the orientation of the hand-controller?

A9) How confidently could you move the hand-controller?

A10) How realistic was the force feedback?

A11) How many unexpected forces did you feel from the hand-controller not matching its position on the screen?

A12) How helpful was the force feedback in accomplishing the task?

A13) Rate the difficulty of positioning the tool tip using the hand-controller.

A14) How was the agreement between the tool motion on the screen and your intended movement?

A15) Rate the difficulty in orienting the tool to pick up the pegs.

A16) Rate the difficulty of roll movement.

A17) Rate the difficulty of pitch movement.

A18) Rate the difficulty of yaw movement.

A19) Rate your fatigue level in the upper limb after operating the hand-controller.

A20) Rate your fatigue level in the wrist after operating the hand-controller.

A21) Rate the appearance of the hand-controller.

For consistent visualization, the questionnaire results were normalized to 0-1, with values closer to one indicating better performance. Below are the visualized summary results averaged across the participants, together with a summary of the demographic and experience information. The survey results showed that 25 participants selected Excalibur as the superior hand-controller, while 2 selected Premium and 3 selected Sigma7. The spider chart of the participants’ demographic and experience data below shows that the participants selecting Excalibur tended to have higher experience and age levels.
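The normalized per-question scoring used in the chunks below can be sketched as closeness to the desired answer, divided by the answer range. The function name and example answers here are hypothetical:

```r
# Hypothetical scoring sketch: a response is scored by its closeness to
# the desired answer, scaled by the answer range (100 for A5-A21, 4 for A1-A4).
# score_answer is an illustrative name, not used in the analysis code.
score_answer <- function(answer, desired, range) {
  1 - abs(answer - desired) / range
}

score_answer(80, 100, 100)  # a "100 is best" question answered 80 -> 0.8
score_answer(30, 0, 100)    # a "0 is best" question answered 30  -> 0.7
```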

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/DataReadSurvey.RData",
    sep = ""
  )
)

# converting date format completion time to double
tm_E <- c()
tm_P <- c()
tm_S <- c()
for (j in 1:30) {
  tm_E <- abind(tm_E,
                60 * 24 * as.numeric(times(strsplit(
                  toString(data.frame(subj_data_survey_main[5 * j, 23])[[1]]), " "
                )[[1]][2])))
  tm_P <- abind(tm_P,
                60 * 24 * as.numeric(times(strsplit(
                  toString(data.frame(subj_data_survey_main[5 * j, 18])[[1]]), " "
                )[[1]][2])))
  tm_S <- abind(tm_S,
                60 * 24 * as.numeric(times(strsplit(
                  toString(data.frame(subj_data_survey_main[5 * j, 13])[[1]]), " "
                )[[1]][2])))
}


tm_brk_E <-
  setNames(
    data.frame(matrix(
      data = NA, nrow = 30, ncol = 4
    )),
    c(
      "completion.time.brk.1",
      "completion.time.brk.2",
      "completion.time.brk.3",
      "completion.time.brk.4"
    )
  )
tm_brk_P <-
  setNames(
    data.frame(matrix(
      data = NA, nrow = 30, ncol = 4
    )),
    c(
      "completion.time.brk.1",
      "completion.time.brk.2",
      "completion.time.brk.3",
      "completion.time.brk.4"
    )
  )
tm_brk_S <-
  setNames(
    data.frame(matrix(
      data = NA, nrow = 30, ncol = 4
    )),
    c(
      "completion.time.brk.1",
      "completion.time.brk.2",
      "completion.time.brk.3",
      "completion.time.brk.4"
    )
  )

for (j in 1:30) {
  for (k in 1:4) {
    tm_brk_E[j, k] <-
      60 * 24 * as.numeric(times(strsplit(toString(
        data.frame(subj_data_survey_main[(5 * (j - 1) + k), 22])[[1]]
      ), " ")[[1]][2]))
    tm_brk_P[j, k] <-
      60 * 24 * as.numeric(times(strsplit(toString(
        data.frame(subj_data_survey_main[(5 * (j - 1) + k), 17])[[1]]
      ), " ")[[1]][2]))
    tm_brk_S[j, k] <-
      60 * 24 * as.numeric(times(strsplit(toString(
        data.frame(subj_data_survey_main[(5 * (j - 1) + k), 12])[[1]]
      ), " ")[[1]][2]))
  }
}


# putting the survey results in one file and standardizing based on the desired answers
desire_answers <-
  c(3, 3, 3, 1,
    100, 100, 100, 100, 100, 100,
    0, 100, 0, 100,
    0, 0, 0, 0, 0, 0,
    100)

subj_survey <- list(
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num"),
    setNames(data.frame(as.numeric(
      factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 3])[[1]]))),
             "train.status"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 4]),
             "years.cat"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 5]),
             "age.range.cat"),
    setNames(data.frame(as.numeric(
      factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 6])[[1]])) - 1),
             "prior.experience.status"),
    setNames(data.frame(as.numeric(
      factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 7])[[1]])) - 1),
             "videogame.experience.status"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 9]),
             "order")
  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.E"),
    tm_brk_E,
    setNames(data.frame(tm_E),
         "completion.time.avg"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 24]),
             "error.count"),
    1 - abs(subj_data_survery_trans[3 * c(1:30), 4:7]
            - t(replicate(30, desire_answers[1:4]))),
    1 - abs(subj_data_survery_trans[3 * c(1:30), 8:24]
            - t(replicate(30, desire_answers[5:21])))
  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.P"),
    tm_brk_P, 
    setNames(data.frame(tm_P),
             "completion.time.avg"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 19]),
             "error.count"),
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 4:7]
            - t(replicate(30, desire_answers[1:4]))),
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 8:24]
            - t(replicate(30, desire_answers[5:21])))
  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.S"),
    tm_brk_S,
    setNames(data.frame(tm_S),
             "completion.time.avg"),
    setNames(data.frame(
      subj_data_survey_main[5 * c(1:30) - 4, 14]),
             "error.count"),
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 4:7]
            - t(replicate(30, desire_answers[1:4]))),
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 8:24]
            - t(replicate(30, desire_answers[5:21])))
  )
)


# scaling the data between 0 and 1
normalit01 <- function(m) {
  return ((m - min(m, na.rm = TRUE)) / (max(m, na.rm = TRUE) - min(m, na.rm = TRUE)))
}
subj_survey_norm <- list(
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num"),
    setNames(data.frame(as.numeric(
      factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 3])[[1]]))),
             "train.status"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 4]),
             "years.cat"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 5]),
             "age.range.cat"),
    setNames(data.frame(as.numeric(
      factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 6])[[1]])) - 1),
             "prior.experience.status"),
    setNames(data.frame(as.numeric(
      factor(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 7])[[1]])) - 1),
             "videogame.experience.status"),
    setNames(data.frame(subj_data_survey_main[5 * c(1:30) - 4, 9]),
             "order")
  ),
  
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.E"),
    normalit01(tm_brk_E),
    setNames(normalit01(data.frame(tm_E)),
             "completion.time.avg"),
    setNames(normalit01(data.frame(
      subj_data_survey_main[5 * c(1:30) - 4, 24])),
             "error.count"),
    1 - abs(subj_data_survery_trans[3 * c(1:30), 4:7]
            - t(replicate(30, desire_answers[1:4]))) / 
      4,
    1 - abs(subj_data_survery_trans[3 * c(1:30), 8:24]
            - t(replicate(30, desire_answers[5:21]))) / 
      100
  ),
  
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.P"),
    normalit01(tm_brk_P),
    setNames(normalit01(data.frame(tm_P)),
             "completion.time.avg"),
    setNames(normalit01(data.frame(
      subj_data_survey_main[5 * c(1:30) - 4, 19])),
             "error.count"),
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 4:7]
            - t(replicate(30, desire_answers[1:4]))) /
      4,
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 1, 8:24]
            - t(replicate(30, desire_answers[5:21]))) /
      100
  ),
  
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.S"),
    normalit01(tm_brk_S),
    setNames(normalit01(data.frame(tm_S)),
             "completion.time.avg"),
    setNames(normalit01(data.frame(
      subj_data_survey_main[5 * c(1:30) - 4, 14])),
             "error.count"),
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 4:7]
            - t(replicate(30, desire_answers[1:4]))) /
      4,
    1 - abs(subj_data_survery_trans[3 * c(1:30) - 2, 8:24]
            - t(replicate(30, desire_answers[5:21]))) /
      100
  )
)
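As a quick illustration, `normalit01` linearly rescales a numeric vector onto [0, 1] while ignoring missing values. The function is reproduced here (identically to its definition above) so the snippet is self-contained, and the input vector is hypothetical:

```r
# normalit01 (as defined in the chunk above) linearly rescales a numeric
# vector onto [0, 1], ignoring NAs; reproduced here to be self-contained.
normalit01 <- function(m) {
  (m - min(m, na.rm = TRUE)) / (max(m, na.rm = TRUE) - min(m, na.rm = TRUE))
}

normalit01(c(10, 15, 20, NA))  # 0.0 0.5 1.0 NA
```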


# summarizing the survey results 
survery_results_E <- t(setNames(data.frame(zapsmall(
  colMeans(data.matrix(subj_survey[[2]])[, 6:28], na.rm = TRUE),
  digits = 4
)), "Excalibur.average"))
survery_results_P <- t(setNames(data.frame(zapsmall(
  colMeans(data.matrix(subj_survey[[3]])[, 6:28], na.rm = TRUE),
  digits = 4
)), "Premium.average"))
survery_results_S <- t(setNames(data.frame(zapsmall(
  colMeans(data.matrix(subj_survey[[4]])[, 6:28], na.rm = TRUE),
  digits = 4
)), "Sigma7.average"))


# mean value of each question across all subjects
sm <-
  cbind(
    setNames(data.frame(c(1:23)), "order"),
    setNames(data.frame(
      c(
        "Completion Time Average",
        "Error Count",
        "A1: Ideal Workspace Size",
        "A2: Ideal Hand-controller Size",
        "A3: Ideal Grip Weight",
        "A4: Tool Orienting Easiness",
        "A5: Motion Smoothness",
        "A6: Positioning Smoothness",
        "A7: Orienting Smoothness",
        "A8: Orienting Comfiness",
        "A9: Motion Confidence",
        "A10: Force Feedback Realness",
        "A11: Non-unexpected Forces",
        "A12: Force Feedback Usefulness",
        "A13: Tooltip Positioning Easiness",
        "A14: Motion Expected",
        "A15: Pickup Tool Orienting Easiness",
        "A16: Tool Roll Easiness",
        "A17: Tool Pitch Easiness",
        "A18: Tool Yaw Easiness",
        "A19: Upper Limb Comfiness",
        "A20: Wrist Comfiness",
        "A21: Hand-controller Appearance"
      )
    ),
    "question.name"),
    setNames(data.frame(zapsmall(
      colMeans(data.matrix(subj_survey_norm[[2]]
      )[, 6:28],
      na.rm = TRUE), digits = 4
    )), "Excalibur Average"),
    setNames(data.frame(zapsmall(
      colMeans(data.matrix(subj_survey_norm[[3]]
      )[, 6:28],
      na.rm = TRUE), digits = 4
    )), "Premium Average"),
    setNames(data.frame(zapsmall(
      colMeans(data.matrix(subj_survey_norm[[4]]
      )[, 6:28],
      na.rm = TRUE), digits = 4
    )), "Sigma7 Average")
  )

sm$question.name <- factor(sm$question.name, levels = sm$question.name)

df_transformed <- melt(
  sm,
  id.vars = "question.name",
  measure.vars = c("Excalibur Average",
                   "Premium Average",
                   "Sigma7 Average")
)

# plot of the performance of different hand-controllers based on the survey
# the plot is colored by the hand-controller group name 'variable'
# the results are scaled between 0 and 1
ggplot(df_transformed,
       aes(
         x = question.name,
         y = value,
         fill = variable,
         label = round(value, digits = 1)
       )) +
  scale_fill_manual(values=c("mediumaquamarine", "salmon", "cornflowerblue")) + 
  geom_bar(stat = "identity") +
  theme_minimal() +
  geom_text(size = 6, position = position_stack(vjust = 0.5)) +
  ggtitle(label = "Performance of Different Hand-controllers",
          subtitle = "According to the Users from the Survey") +
  labs(
    caption = paste(
      "Note: The normalized summary results of the questionnaire data",
      "averaged across the participants along with\n the completion",
      "time and error count results. The data are scaled between",
      "0-1. Significant variables according to\n MANOVA tests in",
      "Section 2.1 are labeled with '*'."
    )
  ) +
  xlab("Survey Question") +
  ylab("Normalized Performance") +
  labs(fill = "Hand-controller") +
  geom_text(x=1, y=1.3, label="*", size = 10) +
  geom_text(x=2, y=0.7, label="*", size = 10) +
  geom_text(x=3, y=2.6, label="*", size = 10) +
  geom_text(x=5, y=2.8, label="*", size = 10) +
  geom_text(x=9, y=2.1, label="*", size = 10) +
  geom_text(x=10, y=2.1, label="*", size = 10) +
  geom_text(x=11, y=2.2, label="*", size = 10) +
  geom_text(x=12, y=1.65, label="*", size = 10) +
  geom_text(x=13, y=1.95, label="*", size = 10) +
  geom_text(x=17, y=2.05, label="*", size = 10) +
  geom_text(x=19, y=1.9, label="*", size = 10) +
  geom_text(x=21, y=2.35, label="*", size = 10) +
  geom_text(x=22, y=2.15, label="*", size = 10) +
  theme(
    plot.title = element_text(
      size = 36,
      face = "bold",
      hjust = 0.5,
      margin = margin(
        t = 0,
        r = 0,
        b = 10,
        l = 0
      )
    ),
    plot.subtitle = element_text(
      size = 28,
      face = "bold",
      hjust = 0.5,
      margin = margin(
        t = 0,
        r = 0,
        b = 10,
        l = 0
      )
    ),
    plot.caption = element_text(
      size = 22,
      hjust = 0,
      margin = margin(
        t = 30,
        r = 0,
        b = 0,
        l = 0
      )
    ),
    axis.text.x = element_text(
      size = 24,
      angle = 45,
      hjust = 1,
      margin = margin(
        t = 0,
        r = 0,
        b = 20,
        l = 0
      )
    ),
    axis.text.y = element_text(
      size = 24,
      hjust = 1,
      margin = margin(
        t = 0,
        r = 0,
        b = 0,
        l = 15
      )
    ),
    axis.title.x = element_text(size = 32),
    axis.title.y = element_text(size = 32,
                                margin = margin(
                                  t = 0,
                                  r = 0,
                                  b = 0,
                                  l = 120
                                )),
    legend.text = element_text(size = 22),
    legend.title = element_text(size = 24)
  )

############################################################################

# plot of subject demographics vs. performance of different hand-controllers
# mean value of the performance question results for each subject
qm <- cbind(
  setNames(data.frame(c(1:30)), "subject.num"),
  setNames(data.frame(rowMeans(
    subj_survey_norm[[2]][, 8:28],
    na.rm = TRUE
  )), "Excalibur Average"),
  setNames(data.frame(rowMeans(
    subj_survey_norm[[3]][, 8:28],
    na.rm = TRUE
  )), "Premium Average"),
  setNames(data.frame(rowMeans(
    subj_survey_norm[[4]][, 8:28],
    na.rm = TRUE
  )), "Sigma7 Average")
)

# summary of subject demographics picking each device
ss_E <- subj_survey[[1]][which(apply(qm[, 2:4], 1, which.max) == 1), ]
ss_P <- subj_survey[[1]][which(apply(qm[, 2:4], 1, which.max) == 2), ]
ss_S <- subj_survey[[1]][which(apply(qm[, 2:4], 1, which.max) == 3), ]

sd <- cbind(
  setNames(data.frame(c(
    dim(ss_E)[1],
    zapsmall(colMeans(array(ss_E[, 2:6]),
                      na.rm = TRUE), digits = 2)
  )), "Excalibur"),
  setNames(data.frame(c(
    dim(ss_P)[1],
    zapsmall(colMeans(array(ss_P[, 2:6]),
                      na.rm = TRUE), digits = 2)
  )), "Premium"),
  setNames(data.frame(c(
    dim(ss_S)[1],
    zapsmall(colMeans(array(ss_S[, 2:6]),
                      na.rm = TRUE), digits = 2)
  )), "Sigma7")
)

row.names(sd) <-
  c(
    "Subject Counts Selected Device",
    "Training Status",
    "Year Level Category",
    "Age Range Category",
    "Prior Experience",
    "Video Game Experience"
  )

# To use the fmsb package, two rows must be prepended to the dataframe:
# the max and min of each variable, which set the plot scale.
sp_data <- data.frame(t(sd)[, 2:6])
sp_data <- rbind(c(1, 5, 4, 1, 1) , c(0, 1, 1, 0, 0), sp_data)

# making the spider graph
colors_border = c(rgb(0.4, 0.86, 0.66, 0.9),
                  rgb(1, 0.54, 0.41, 0.9) ,
                  rgb(0.39, 0.58, 0.92, 0.9))
colors_in = c(rgb(0.4, 0.86, 0.66, 0.4),
              rgb(1, 0.54, 0.41, 0.5) ,
              rgb(0.39, 0.58, 0.92, 0.4))
radarchart(
  sp_data,
  axistype = 1 ,
  # custom polygon
  pcol = colors_border ,
  pfcol = colors_in ,
  plwd = 4 ,
  plty = 1,
  # custom the grid
  cglcol = "grey",
  cglty = 1,
  axislabcol = "grey",
  caxislabels = seq(0, 1, 0.25),
  cglwd = 8,
  calcex = 2,
  # custom labels
  vlabels = c(
    "Training Status",
    "Year Level Category",
    "Age Range Category",
    "Prior Experience",
    "Video Game Experience"
  ),
  vlcex = 2,
  title = "Summary of the Demographics of the Subjects Picking Each Device",
  cex.main = 2.5
)
legend(
  x = 0.7,
  y = 1,
  legend = c(
    "Excalibur: 25 Subjects",
    "Premium: 2 Subjects",
    "Sigma7: 3 Subjects"
  ),
  bty = "n",
  pch = 20 ,
  col = colors_in ,
  text.col = "grey",
  cex = 2,
  pt.cex = 8
)

save(subj_survey,survery_results_E,survery_results_P,survery_results_S,
     file = paste("~/Desktop/Canada/neuroArm/neuroArmPLUS",
       "/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",  sep=""))

save(subj_survey_norm,survery_results_E,survery_results_P,survery_results_S,
     file = paste("~/Desktop/Canada/neuroArm/neuroArmPLUS",
       "/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",  sep=""))

1.2 Extracting and Managing the Underlying Sensor Features Used in User Performance Metrics

Following the summary results for the questionnaire, the direct data from the sensors and hand-controllers are extracted and presented in this section. A header snapshot of the raw data is presented, showing six rows starting from row 5000 for one of the participants. The recorded raw data included time in seconds; tool tip position; force feedback; gimbal and arm angles derived through Denavit-Hartenberg parameters, all presented along the x, y, and z axes; pinch value; pedal usage; and the number of clutch uses. The experiment was repeated in 4 trials with 4 peg-in-hole tasks within each trial, for a total of 16 task repetitions.

The recorded data were converted into the hand-controller user features introduced in Hoshyarmanesh et al. 2019: the sum of arm angles in the hand-controller (Arm Angle Sum), the time to complete the task (Completion Time), the root mean square of the force feedback (Force RMS), the sum of gimbal angles in the hand-controller (Gimbal Angle Sum), the number of clutch uses during the task (Number of Clutches), and the path length traced by the hand-controller gripper to accomplish the task (Travel Distance). Please note that for Force RMS and Travel Distance, the data from the slave robot end effector (Kuka) were used rather than the hand-controller itself. The calculated features were summarized for each participant by averaging across the 16 task trials. To be consistent with the questionnaire data, and for the sake of simplicity in our comparisons, the extracted features were normalized to 0-1. Below are the visualized summary results averaged across the participants.
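The Travel Distance computation in the extraction loop below reduces to summing the Euclidean distances between consecutive position samples. A self-contained sketch with made-up coordinates:

```r
# Sketch of the Travel Distance feature: the path length is the sum of
# Euclidean distances between consecutive position samples.
# The coordinates below are made up for illustration.
pos <- matrix(c(0, 0, 0,
                3, 4, 0,
                3, 4, 12),
              ncol = 3, byrow = TRUE)
pos_diff <- pos[-1, , drop = FALSE] - pos[-nrow(pos), , drop = FALSE]
sum(sqrt(rowSums(pos_diff^2)))  # 5 + 12 = 17
```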

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/DataReadSensor.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/DataReadSensorKuka.RData",
    sep = ""
  )
)


# Printing the top six rows of Subject 1's data as an example after row 5000
head(subject1_features_E[5000:5500, ]) %>% round(digits = 3)
abbrv_device <- c("E", "P", "S")
completion_time <- array(NA, dim = c(30, 16, 3))
travel_distance <- array(NA, dim = c(30, 16, 3))
num_clutches <- array(NA, dim = c(30, 16, 3))
sum_gimbangle <- array(NA, dim = c(30, 16, 3))
sum_armangle <- array(NA, dim = c(30, 16, 3))
sos_force <- array(NA, dim = c(30, 16, 3))
for (i in 1:3) {
  for (j in 1:30) {
    for (k in 1:16) {
      #print(paste0("device: ", i))
      #print(paste0("subject: ", j))
      #print(paste0("trial: ", k))
      task_data <- get(paste0("subject", j,
                              "_features_", 
                              abbrv_device[i]))[get(paste0("subject", j, 
                                                           "_features_",      
                                                           abbrv_device[i]))[, "experim_num"] == k,]
      task_data_kuka <- get(paste0("Kuka_subject", j,
                                   "_features_", 
                                   abbrv_device[i]))[get(paste0("Kuka_subject", j, 
                                                                "_features_",      
                                                                abbrv_device[i]))[, "experim_num"] == k,]
      
      if (dim(task_data)[1] != 0) {
        # finding the completion time (CT)
        completion_time[j, k, i] = max(task_data[, "time_in_sec"], na.rm = FALSE) -
          min(task_data[, "time_in_sec"], na.rm = FALSE)
        
        # finding the travel distance (TD)
        pos <-
          task_data_kuka[, c("position_x", "position_y", "position_z")]
        pos_diff <- pos[-1, ] - pos[-nrow(pos), ]
        travel_distance[j, k, i] = sum(sqrt(rowSums(pos_diff * pos_diff)))
        
        # finding the number of clutches (NC)
        num_clutches[j, k, i] = max(task_data[, "clutch_num"], na.rm = FALSE)
        
        # finding the sum of the variation of gimbal angle (sigma(delta_theta_4-6))
        gimb <-
          task_data[, c("gimbangle_x", "gimbangle_y", "gimbangle_z")]
        gimb_diff <- gimb[-1, ] - gimb[-nrow(gimb), ]
        sum_theta <- colSums(gimb_diff, na.rm = FALSE, dims = 1)
        sum_gimbangle[j, k, i] = sum(abs(sum_theta))
        
        # finding the sum of the variation of arm angle (sigma(delta_theta_1-3))
        arm <-
          task_data[, c("armangle_x", "armangle_y", "armangle_z")]
        arm_diff <- arm[-1, ] - arm[-nrow(arm), ]
        sum_theta <- colSums(arm_diff, na.rm = FALSE, dims = 1)
        sum_armangle[j, k, i] = sum(abs(sum_theta))
        
        # finding the sum of force magnitudes sqrt(Fx^2 + Fy^2 + Fz^2) over samples (SOSF)
        frc <- task_data_kuka[, c("force_x", "force_y", "force_z")]
        sos_force[j, k, i] = sum(sqrt(rowSums(frc * frc)))
      } else{
        completion_time[j, k, i] = NA
        travel_distance[j, k, i] = NA
        num_clutches[j, k, i] = NA
        sum_gimbangle[j, k, i] = NA
        sum_armangle[j, k, i] = NA
        sos_force[j, k, i] = NA
      }
      
    }
  }
}

# removing the outlier in sos_force
sos_force[sos_force < 1] <- NA

completion_time[completion_time == 0] <- NA
travel_distance[travel_distance == 0] <- NA
# num_clutches[num_clutches == 0] <- NA
sum_gimbangle[sum_gimbangle == 0] <- NA
sum_armangle[sum_armangle == 0] <- NA
sos_force[sos_force == 0] <- NA

# scaling each feature by a fixed factor so the values fall roughly within 0-1
completion_time_norm <- completion_time / 1000
travel_distance_norm <- travel_distance / 1000
num_clutches_norm <- num_clutches / 10
sum_gimbangle_norm <- sum_gimbangle / 5
sum_armangle_norm <- sum_armangle
sos_force_norm <- sos_force / 100000



# putting the sensor results in one file
subj_feature <- list(
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.E"),
    setNames(data.frame(rowMeans(completion_time[, , 1] ,na.rm = TRUE)),
             "completion.time"),
    setNames(data.frame(rowMeans(travel_distance[, , 1] ,na.rm = TRUE)),
             "travel.distance"),
    setNames(data.frame(rowMeans(num_clutches[, , 1] ,na.rm = TRUE)),
             "number.clutches"),
    setNames(data.frame(rowMeans(sos_force[, , 1] ,na.rm = TRUE)),
             "sos.force"),
    setNames(data.frame(rowMeans(sum_gimbangle[, , 1] ,na.rm = TRUE)),
             "sum.gimbal.angle"),
    setNames(data.frame(rowMeans(sum_armangle[, , 1] ,na.rm = TRUE)),
             "sum.arm.angle")

  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.P"),
    setNames(data.frame(rowMeans(completion_time[, , 2] ,na.rm = TRUE)),
             "completion.time"),
    setNames(data.frame(rowMeans(travel_distance[, , 2] ,na.rm = TRUE)),
             "travel.distance"),
    setNames(data.frame(rowMeans(num_clutches[, , 2] ,na.rm = TRUE)),
             "number.clutches"),
    setNames(data.frame(rowMeans(sos_force[, , 2] ,na.rm = TRUE)),
             "sos.force"),
    setNames(data.frame(rowMeans(sum_gimbangle[, , 2] ,na.rm = TRUE)),
             "sum.gimbal.angle"),
    setNames(data.frame(rowMeans(sum_armangle[, , 2] ,na.rm = TRUE)),
             "sum.arm.angle")
  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.S"),
    setNames(data.frame(rowMeans(completion_time[, , 3] ,na.rm = TRUE)),
             "completion.time"),
    setNames(data.frame(rowMeans(travel_distance[, , 3] ,na.rm = TRUE)),
             "travel.distance"),
    setNames(data.frame(rowMeans(num_clutches[, , 3] ,na.rm = TRUE)),
             "number.clutches"),
    setNames(data.frame(rowMeans(sos_force[, , 3] ,na.rm = TRUE)),
             "sos.force"),
    setNames(data.frame(rowMeans(sum_gimbangle[, , 3] ,na.rm = TRUE)),
             "sum.gimbal.angle"),
    setNames(data.frame(rowMeans(sum_armangle[, , 3] ,na.rm = TRUE)),
             "sum.arm.angle")
  )
)


# putting the sensor results in one file (scaled)
subj_feature_norm <- list(
  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.E"),
    setNames(data.frame(rowMeans(completion_time_norm[, , 1] ,na.rm = TRUE)),
             "completion.time"),
    setNames(data.frame(rowMeans(travel_distance_norm[, , 1] ,na.rm = TRUE)),
             "travel.distance"),
    setNames(data.frame(rowMeans(num_clutches_norm[, , 1] ,na.rm = TRUE)),
             "number.clutches"),
    setNames(data.frame(rowMeans(sos_force_norm[, , 1] ,na.rm = TRUE)),
             "sos.force"),
    setNames(data.frame(rowMeans(sum_gimbangle_norm[, , 1] ,na.rm = TRUE)),
             "sum.gimbal.angle"),
    setNames(data.frame(rowMeans(sum_armangle_norm[, , 1] ,na.rm = TRUE)),
             "sum.arm.angle")
  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.P"),
    setNames(data.frame(rowMeans(completion_time_norm[, , 2] ,na.rm = TRUE)),
             "completion.time"),
    setNames(data.frame(rowMeans(travel_distance_norm[, , 2] ,na.rm = TRUE)),
             "travel.distance"),
    setNames(data.frame(rowMeans(num_clutches_norm[, , 2] ,na.rm = TRUE)),
             "number.clutches"),
    setNames(data.frame(rowMeans(sos_force_norm[, , 2] ,na.rm = TRUE)),
             "sos.force"),
    setNames(data.frame(rowMeans(sum_gimbangle_norm[, , 2] ,na.rm = TRUE)),
             "sum.gimbal.angle"),
    setNames(data.frame(rowMeans(sum_armangle_norm[, , 2] ,na.rm = TRUE)),
             "sum.arm.angle")
  ),

  cbind(
    setNames(data.frame(c(1:30)),
             "subject.num.S"),
    setNames(data.frame(rowMeans(completion_time_norm[, , 3] ,na.rm = TRUE)),
             "completion.time"),
    setNames(data.frame(rowMeans(travel_distance_norm[, , 3] ,na.rm = TRUE)),
             "travel.distance"),
    setNames(data.frame(rowMeans(num_clutches_norm[, , 3] ,na.rm = TRUE)),
             "number.clutches"),
    setNames(data.frame(rowMeans(sos_force_norm[, , 3] ,na.rm = TRUE)),
             "sos.force"),
    setNames(data.frame(rowMeans(sum_gimbangle_norm[, , 3] ,na.rm = TRUE)),
             "sum.gimbal.angle"),
    setNames(data.frame(rowMeans(sum_armangle_norm[, , 3] ,na.rm = TRUE)),
             "sum.arm.angle")

  )
)


save(
  subj_feature,
  completion_time,
  travel_distance,
  num_clutches,
  sum_gimbangle,
  sum_armangle,
  sos_force,
  file = paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
    sep = ""
  )
)


save(
  subj_feature_norm,
  completion_time_norm,
  travel_distance_norm,
  num_clutches_norm,
  sum_gimbangle_norm,
  sum_armangle_norm,
  sos_force_norm,
  file = paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
    sep = ""
  )
)
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
    sep = ""
  )
)

# mean value of each feature across all subjects
sm <-
  cbind(
    setNames(data.frame(
      c(
        "Travel Distance (m)",
        "Number of Clutches",
        "Force RMS (N)",
        "Gimbal Angle Sum (Rad)",
        "Arm Angle Sum (Rad)"
      )
    ),
    "feature.name"),
    setNames(data.frame(zapsmall(
      colMeans(data.matrix(subj_feature_norm[[1]])[, 3:7], na.rm = TRUE), digits = 4
    )), "Excalibur Average"),
    setNames(data.frame(zapsmall(
      colMeans(data.matrix(subj_feature_norm[[2]])[, 3:7], na.rm = TRUE), digits = 4
    )), "Premium Average"),
    setNames(data.frame(zapsmall(
      colMeans(data.matrix(subj_feature_norm[[3]])[, 3:7], na.rm = TRUE), digits = 4
    )), "Sigma7 Average")
  )

        
df_transformed <- melt(
  sm,
  id.vars = "feature.name",
  measure.vars = c("Excalibur Average",
                   "Premium Average",
                   "Sigma7 Average")
)
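The `melt` call reshapes the wide summary table (one column per device) into the long format that `ggplot2` expects, with one row per feature-device pair. A minimal, self-contained illustration with toy data (the column names here are placeholders, not the study variables):

```r
library(reshape2)

# toy wide table: one row per feature, one column per device
sm_toy <- data.frame(feature.name = c("f1", "f2"),
                     A = c(1, 2),
                     B = c(3, 4))

# long format: one row per (feature, device) pair
melt(sm_toy, id.vars = "feature.name", measure.vars = c("A", "B"))
#   feature.name variable value
# 1           f1        A     1
# 2           f2        A     2
# 3           f1        B     3
# 4           f2        B     4
```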

# plot of the performance features for different hand-controllers based on sensor data
# the plot is colored by the hand-controller group 'variable'
# the results are scaled between 0 and 1
ggplot(df_transformed,
       aes(
         x = feature.name,
         y = value,
         fill = variable,
         label = value
       )) +
  geom_text(x=1, y=0.6, label="*", size = 12) +
  geom_text(x=2, y=0.55, label="*", size = 12) +
  geom_text(x=5, y=0.77, label="*", size = 12) +
  scale_fill_manual(values=c("mediumaquamarine", "salmon", "cornflowerblue")) + 
  geom_bar(stat = "identity") +
  theme_minimal() +
  geom_text(size = 6, position = position_stack(vjust = 0.5)) +
  ggtitle(label = "Performance Features for Different Hand-controllers",
          subtitle = "According to the Users from the Sensor Data") +
  annotate('text', x = 3, y = 1.6, 
        label = "The~data~are~scaled~between~0~and~1~
                 where~the~Gimbal~Angle~Sum~is~divided~by~5" , parse = TRUE, size=8) +
  annotate('text', x = 3, y = 1.4, 
        label = "Force~RMS~by~10^{5}~-
                 ~Number~of~Clutches~by~10~
                 and~Travel~Distance~by~10^{3}" , parse = TRUE, size=8) +
  labs(
    caption = paste(
      "Note: The normalized summary results of the performance features",
      "from the data recorded by the hand-controllers\n averaged across",
      "the participants. Please note that the arm angle data was not recorded by Sigma7.",
      "Significant\n variables according to MANOVA tests in Section 2.2 are labeled with '*'."
    )
  ) +
  xlab("Feature from Sensor Data") +
  ylab("Normalized Feature") +
  labs(fill = "Hand-controller") +
  theme(
    plot.title = element_text(
      size = 36,
      face = "bold",
      hjust = 0.5,
      margin = margin(
        t = 0,
        r = 0,
        b = 10,
        l = 0
      )
    ),
    plot.subtitle = element_text(
      size = 28,
      face = "bold",
      hjust = 0.5,
      margin = margin(
        t = 0,
        r = 0,
        b = 10,
        l = 0
      )
    ),
    plot.caption = element_text(
      size = 22,
      hjust = 0,
      margin = margin(
        t = 30,
        r = 0,
        b = 0,
        l = 0
      )
    ),
    axis.text.x = element_text(
      size = 24,
      angle = 45,
      hjust = 1,
      margin = margin(
        t = 0,
        r = 0,
        b = 20,
        l = 0
      )
    ),
    axis.text.y = element_text(
      size = 24,
      hjust = 1,
      margin = margin(
        t = 0,
        r = 0,
        b = 0,
        l = 15
      )
    ),
    axis.title.x = element_text(size = 32),
    axis.title.y = element_text(size = 32,
                                margin = margin(
                                  t = 0,
                                  r = 0,
                                  b = 0,
                                  l = 120
                                )),
    legend.text = element_text(size = 22),
    legend.title = element_text(size = 24)
  )


1.3 Calculating the User Performance Metrics

In this section, the performance metrics defined in Hoshyarmanesh et al. 2019 were calculated from the features derived in the previous section. The performance metrics included the hand-controller maneuverability expressed as the level of angular movement over time (Performance (angular maneuver)), the time to finish the task (Performance (completion time)), the force exerted over the path length of the trajectory (Performance (force over distance)), the number of clutch uses during the task (Performance (number of clutches)), and the path length of the overall trajectory (Performance (travel distance)).

For the reasons mentioned above, the calculated performance metrics were scaled between 0 and 1. Below is a visual summary of the performance metrics averaged across the participants. In this figure, values closer to 1 indicate better performance.
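As a worked illustration of one of these metrics, the clutch-count performance maps the number of clutches onto a 0-1 score through a quarter-ellipse term, mirroring the `perf_u_nc_all` computation in the chunk below. This is a hedged sketch: the default reference value and range follow the constants used there, the `pmax` clipping is an added safeguard rather than part of the original formula, and the original code additionally divides by 5 for plotting scale.

```r
# sketch of the clutch-count performance term:
# perf = sqrt(1 - ((NC - NC_ref) / b)^2)
perf_clutches <- function(nc, nc_ref = 1, b = 25) {
  # pmax guards against counts beyond nc_ref + b (would yield NaN otherwise)
  sqrt(pmax(0, 1 - ((nc - nc_ref) / b)^2))
}

perf_clutches(1)   # no extra clutches -> 1 (best)
perf_clutches(26)  # nc_ref + b clutches -> 0 (worst)
```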

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
    sep = ""
  )
)


ct_ref <- 15 # reference completion time 15 sec
td_ref <- 0.2 # reference travel distance 0.2 m
nc_ref <- 1  # reference number of clutches
b <- 25 # b = NC_max - NC_ref = 26 - 1 = 25 (the recorded max was actually 62, treated as an outlier)

perf_u_ct_all <- array(NA, dim = c(1, 3))
perf_u_td_all <- array(NA, dim = c(1, 3))
perf_u_nc_all <- array(NA, dim = c(1, 3))
perf_u_dt_all <- array(NA, dim = c(1, 3))
perf_u_sf_all <- array(NA, dim = c(1, 3))
for (i in 1:3) {
  perf_u_ct_all[, i] = ct_ref / mean(subj_survey[[i+1]]$completion.time.avg, na.rm = TRUE) 
  perf_u_td_all[, i] = td_ref / mean(rowMeans(travel_distance[, , i], na.rm = TRUE), na.rm = TRUE) * 500
  perf_u_nc_all[, i] = sqrt(1 - ((mean( rowMeans(num_clutches[, , i], na.rm = TRUE), na.rm = TRUE) - 
                                nc_ref) / b) ^ 2) / 5
  perf_u_dt_all[, i] = mean(rowMeans(sum_gimbangle[, , i], na.rm = TRUE), na.rm = TRUE) / 
                       (30 * mean(subj_survey[[i+1]]$completion.time.avg, na.rm = TRUE) * 
                             mean(rowMeans(travel_distance[, , i], na.rm = TRUE), na.rm = TRUE)) * 10^5
  perf_u_sf_all[, i] = mean(rowMeans(sos_force[, , i], na.rm = TRUE), na.rm = TRUE) / 
                       (mean(subj_survey[[i+1]]$completion.time.avg, na.rm = TRUE) * 
                        mean(rowMeans(travel_distance[, , i], na.rm = TRUE), na.rm = TRUE)) / 5
}

# mean value of each performance metric across all subjects
sm <-
  cbind(
    setNames(data.frame(
      c(
        "Performance (completion time)",
        "Performance (travel distance)",
        "Performance (number of clutches)",
        "Performance (angular maneuver)",
        "Performance (force over distance)"
      )
    ),
    "perf.name"),
    setNames(data.frame(zapsmall(
      c(perf_u_ct_all[, 1], 
        perf_u_td_all[, 1], 
        perf_u_nc_all[, 1], 
        perf_u_dt_all[, 1], 
        perf_u_sf_all[, 1]), digits = 4
    )), "Excalibur Average"),
    setNames(data.frame(zapsmall(
      c(perf_u_ct_all[, 2], 
        perf_u_td_all[, 2], 
        perf_u_nc_all[, 2], 
        perf_u_dt_all[, 2], 
        perf_u_sf_all[, 2]), digits = 4
    )), "Premium Average"),
    setNames(data.frame(zapsmall(
      c(perf_u_ct_all[, 3], 
        perf_u_td_all[, 3], 
        perf_u_nc_all[, 3], 
        perf_u_dt_all[, 3], 
        perf_u_sf_all[, 3]), digits = 4
    )), "Sigma7 Average")
  )


df_transformed <- melt(
  sm,
  id.vars = "perf.name",
  measure.vars = c("Excalibur Average",
                   "Premium Average",
                   "Sigma7 Average")
)

# plot of the performance metrics for different hand-controllers based on sensor data
# the plot is colored by the hand-controller group 'variable'
# the results are normalized between 0 and 1 (1 indicating the best performance)
ggplot(df_transformed,
       aes(
         x = perf.name,
         y = value,
         fill = variable,
         label = value
       )) +
  scale_fill_manual(values=c("mediumaquamarine", "salmon", "cornflowerblue")) + 
  geom_bar(stat = "identity") +
  theme_minimal() +
  geom_text(size = 6, position = position_stack(vjust = 0.5)) +
  ggtitle(label = "Comparison of Performance Metrics for Different Hand-controllers",
          subtitle = "According to the Users from the Sensor Data") +
  annotate('text', x = 3, y = 1.8, 
        label = "The~data~are~scaled~between~0~and~1~
                 where~the~Angular~Maneuver~is~multiplied~by~10^{5}" , parse = TRUE, size=8) +
  annotate('text', x = 3, y = 1.6, 
        label = "Travel~Distance~by~5~x~10^{2}~
                  and~Force~over~Distance~and~Number~of~Clutches~divided~by~5" , parse = TRUE, size=8) +
  labs(
    caption = paste(
      "Note: The normalized summary results of the performance metrics",
      "calculated from the performance features\n averaged across the",
      "participants."
    )
  ) +
  xlab("Performance Metric from Sensor Data") +
  ylab("Normalized Performance Level") +
  labs(fill = "Hand-controller") +
  theme(
    plot.title = element_text(
      size = 36,
      face = "bold",
      hjust = 0.5,
      margin = margin(
        t = 0,
        r = 0,
        b = 10,
        l = 0
      )
    ),
    plot.subtitle = element_text(
      size = 28,
      face = "bold",
      hjust = 0.5,
      margin = margin(
        t = 0,
        r = 0,
        b = 10,
        l = 0
      )
    ),
    plot.caption = element_text(
      size = 22,
      hjust = 0,
      margin = margin(
        t = 30,
        r = 0,
        b = 0,
        l = 0
      )
    ),
    axis.text.x = element_text(
      size = 24,
      angle = 45,
      hjust = 1,
      margin = margin(
        t = 0,
        r = 0,
        b = 20,
        l = 0
      )
    ),
    axis.text.y = element_text(
      size = 24,
      hjust = 1,
      margin = margin(
        t = 0,
        r = 0,
        b = 0,
        l = 15
      )
    ),
    axis.title.x = element_text(size = 32),
    axis.title.y = element_text(size = 32,
                                margin = margin(
                                  t = 0,
                                  r = 0,
                                  b = 0,
                                  l = 120
                                )),
    legend.text = element_text(size = 22),
    legend.title = element_text(size = 24)
  )

save(
  perf_u_ct_all,
  perf_u_td_all,
  perf_u_nc_all,
  perf_u_dt_all,
  perf_u_sf_all,
  file = paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/PerformanceUsersAll.RData",
    sep = ""
  )
)

1.4 Sub-task Performance Breakdowns

We also provide breakdowns of the four trials of the peg-in-hole task for Completion Time, as well as breakdowns over the subtasks with four different peg shapes. The results for Completion Time, Travel Distance, and the Number of Clutches are shown across the three hand-controllers. Completion Time decreases through the first three trials for Premium and Sigma7, then increases in the last trial. For Excalibur, however, it decreases across all four trials (with a minor increase in the 2nd trial), showing an improvement over the course of the experiment. On the other hand, while the average feature values differ between hand-controllers, the pattern of change across the peg-manipulation subtasks is consistent among the devices. Surprisingly, this pattern is ascending for Completion Time, meaning that the peg shape has a larger effect on time than does becoming accustomed to the task. In addition, the results indicate higher values of both Travel Distance and the Number of Clutches for Peg 2. Peg 1 was found to be the easiest peg according to all three features.

We anticipated a learning curve both over the four trials and between the three hand-controllers. Although we observed a tendency towards improvement in trial time over the course of the experiments per user, the mean fourth-trial time was slower than the mean third-trial time for Premium and Sigma7. During the experiments it was noted that users became more comfortable with the task and therefore faster. By the fourth trial, however, users may have experienced muscle and mental fatigue, making their movements less precise and their completion times slower (Slack et al. 2008). Surprisingly, Excalibur users continued to improve across all four trials, which can be attributed to its specific tool design and higher maneuverability, imposing minimal fatigue in such a precise task. A second explanation for the slower fourth trial with the other hand-controllers may relate to the board design. The subtask completion-time breakdowns showed an ascending trend from the first peg to the last. The first peg exhibited the fastest completion time in spite of being the first subtask, likely because it required only 30 degrees of tool roll, while the subsequent pegs required up to 120 degrees. The nature of each specific task may therefore play an important role in user performance. Moving the pegs from the angled surface to the flat surface was easier than moving them in the opposite direction; it is conceivably more difficult to simultaneously orient a peg in both the pitch and roll orientations than to maneuver in only one dimension.

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
    sep = ""
  )
)


# helper: build a long-format data frame of per-peg means for one feature;
# peg p across the four trials occupies run columns p, p + 4, p + 8, p + 12
peg_breakdown <- function(feat, feat_name) {
  devices <- c("Excalibur", "Premium", "Sigma7")
  do.call(rbind, lapply(1:3, function(d) {
    cbind(
      setNames(data.frame(rep(c(1:30), 4)), "user"),
      setNames(data.frame(rep(devices[d], 120)), "Device"),
      setNames(data.frame(rep(as.character(1:4), each = 30)), "Peg"),
      setNames(data.frame(c(
        rowMeans(feat[, c(1, 5, 9, 13), d], na.rm = TRUE),
        rowMeans(feat[, c(2, 6, 10, 14), d], na.rm = TRUE),
        rowMeans(feat[, c(3, 7, 11, 15), d], na.rm = TRUE),
        rowMeans(feat[, c(4, 8, 12, 16), d], na.rm = TRUE)
      )), feat_name)
    )
  }))
}

CT_task <- peg_breakdown(completion_time, "completion.time")

TD <- peg_breakdown(travel_distance, "travel.distance")

NC <- peg_breakdown(num_clutches, "number.clutches")

##############################################################

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",
    sep = ""
  )
)



CT_trial <- rbind(cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
                  setNames(data.frame(rep("Excalibur", 120)), "Device"),
                  setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Trial"),
                  setNames(data.frame(c(subj_survey[[2]]$completion.time.brk.1,
                                        subj_survey[[2]]$completion.time.brk.2,
                                        subj_survey[[2]]$completion.time.brk.3,
                                        subj_survey[[2]]$completion.time.brk.4)), "completion.time")),
                  cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
                  setNames(data.frame(rep("Premium", 120)), "Device"),
                  setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Trial"),
                  setNames(data.frame(c(subj_survey[[3]]$completion.time.brk.1,
                                        subj_survey[[3]]$completion.time.brk.2,
                                        subj_survey[[3]]$completion.time.brk.3,
                                        subj_survey[[3]]$completion.time.brk.4)), "completion.time")), 
            cbind(setNames(data.frame(rep(c(1:30), 4)), "user"),
                  setNames(data.frame(rep("Sigma7", 120)), "Device"),
                  setNames(data.frame(c(rep("1", 30), rep("2", 30), rep("3", 30), rep("4", 30))), "Trial"),
                  setNames(data.frame(c(subj_survey[[4]]$completion.time.brk.1,
                                        subj_survey[[4]]$completion.time.brk.2,
                                        subj_survey[[4]]$completion.time.brk.3,
                                        subj_survey[[4]]$completion.time.brk.4)), "completion.time")) 
          )

1.4.1 Completion Time (per trial)

pirateplot(
  formula = completion.time ~ Trial + Device,
  data = CT_trial,
  main = "Pirateplots of Completion Time Breakdown over Different Trials",
  ylab = "Completion Time (minutes - based on manual recordings)",
  theme = 2,
  pal = "pony",
  inf.f.o = .6,
  inf.disp = "bean",
  point.o = .3,   
  # Turn up points
  bean.b.o = .4,
  # Turn down bean borders
  quant = c(.1, .9),
  # 10th and 90th quantiles
  quant.col = "black" # Black quantile lines
) 

1.4.2 Completion Time (per peg)

pirateplot(
  formula = completion.time/60 ~ Peg + Device,
  data = CT_task,
  main = "Pirateplots of Completion Time Breakdown over Different Subtasks",
  ylab = "Completion Time (minutes - based on sensor data)",
  theme = 2,
  pal = "pony",
  inf.f.o = .6,
  inf.disp = "bean",
  point.o = .3,   
  # Turn up points
  bean.b.o = .4,
  # Turn down bean borders
  quant = c(.1, .9),
  # 10th and 90th quantiles
  quant.col = "black" # Black quantile lines
) 

1.4.3 Travel Distance

pirateplot(
  formula = travel.distance ~ Peg + Device,
  data = TD,
  main = "Pirateplots of Travel Distance Breakdown over Different Subtasks",
  ylab = "Travel Distance (cm)",
  theme = 2,
  pal = "pony",
  inf.f.o = .6,
  inf.disp = "bean",
  point.o = .3,   
  # Turn up points
  bean.b.o = .4,
  # Turn down bean borders
  quant = c(.1, .9),
  # 10th and 90th quantiles
  quant.col = "black" # Black quantile lines
)

1.4.4 Number of Clutches

pirateplot(
  formula = number.clutches ~ Peg + Device,
  data = NC,
  main = "Pirateplots of Number of Clutches Breakdown over Different Subtasks",
  ylab = "Number of Clutches",
  theme = 2,    
  pal = "pony",
  inf.f.o = .6,
  inf.disp = "bean",
  point.o = .3,   
  # Turn up points
  bean.b.o = .4,
  # Turn down bean borders
  quant = c(.1, .9),
  # 10th and 90th quantiles
  quant.col = "black" # Black quantile lines
) 


2 Statistical Tests and Relationship Establishment

2.1 Performing the MANOVA and MANCOVA Tests on Survey Results

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
    sep = ""
  )
)

df_transformed <-
  rbind(
    cbind(subj_survey[[1]][, 2:7], subj_survey[[2]][, 6:28],
          device = 'Excalibur'),
    cbind(subj_survey[[1]][, 2:7], subj_survey[[3]][, 6:28],
          device = 'Premium'),
    cbind(subj_survey[[1]][, 2:7], subj_survey[[4]][, 6:28],
          device = 'Sigma7')
  )

In this section, MANOVA tests (comparing the subjective performance results with device as the factor) and MANCOVA tests (the same comparison with the demographic variables added as covariates) were performed on the survey results. Repeated-measures ANOVA was not used here since the comparisons between the response variables (each question or feature) are not of interest; rather, what matters is the effect of the observed factor (the hand-controller) on the multivariate responses (Friedrich et al. 2018). The significance codes for differences between devices are provided below each test result. The hand-controllers differed significantly on several subjective survey measures; among them, the completion time (completion.time.avg), error count (error.count), workspace (A1), weight (A3), orientation smoothness and comfort (A7 and A8), unexpected force feedback (A11), and wrist and upper-arm fatigue level (A19 and A20) were the most significant responses. Additionally, Year Level Category (years.cat) was the only covariate to reach significance in the overall MANCOVA.
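As a small, self-contained illustration of how the per-response significance reported below can be tabulated programmatically, the sketch fits a MANOVA on toy data and pulls the factor's p-value out of each univariate ANOVA table (variable names here are placeholders, not the study data):

```r
set.seed(1)

# toy data: a 3-level factor and two response variables
df_toy <- data.frame(g  = factor(rep(c("a", "b", "c"), each = 10)),
                     y1 = rnorm(30),
                     y2 = rnorm(30))

fit <- manova(cbind(y1, y2) ~ g, data = df_toy)

# summary.aov() returns one univariate ANOVA summary per response;
# the factor's p-value sits in the first row of each table
pvals <- sapply(summary.aov(fit), function(a) a[[1]][["Pr(>F)"]][1])
pvals
```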

2.1.1 MANOVA Test Results

Overall test summary:

# MANOVA test
res.man <- manova(
  cbind(
    completion.time.avg,
    error.count,
    A1,
    A2,
    A3,
    A4,
    A5,
    A6,
    A7,
    A8,
    A9,
    A10,
    A11,
    A12,
    A13,
    A14,
    A15,
    A16,
    A17,
    A18,
    A19,
    A20,
    A21
  ) ~ device,
  data = df_transformed
)
summary(res.man)
##           Df Pillai approx F num Df den Df    Pr(>F)    
## device     2 1.1169   3.1343     46    114 4.378e-07 ***
## Residuals 78                                            
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Summary of the analysis of variance model:

# Look to see which differ
summary.aov(res.man)
##  Response completion.time.avg :
##             Df Sum Sq Mean Sq F value  Pr(>F)  
## device       2   7798  3899.1  4.6933 0.01189 *
## Residuals   78  64801   830.8                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response error.count :
##             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device       2  29.863 14.9315  4.4475 0.01482 *
## Residuals   78 261.865  3.3572                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A1 :
##             Df  Sum Sq Mean Sq F value    Pr(>F)    
## device       2  6.7858  3.3929   9.417 0.0002171 ***
## Residuals   78 28.1031  0.3603                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A2 :
##             Df  Sum Sq Mean Sq F value Pr(>F)
## device       2  1.4656 0.73279  1.8887 0.1581
## Residuals   78 30.2628 0.38798               
## 
##  Response A3 :
##             Df  Sum Sq Mean Sq F value   Pr(>F)   
## device       2  2.6944 1.34721  5.5423 0.005615 **
## Residuals   78 18.9599 0.24308                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A4 :
##             Df Sum Sq Mean Sq F value Pr(>F)
## device       2   2.66 1.33018  1.9452 0.1498
## Residuals   78  53.34 0.68384               
## 
##  Response A5 :
##             Df  Sum Sq Mean Sq F value Pr(>F)
## device       2   418.6  209.28  0.8854 0.4167
## Residuals   78 18437.0  236.37               
## 
##  Response A6 :
##             Df  Sum Sq Mean Sq F value Pr(>F)
## device       2  1212.6  606.28   2.141 0.1244
## Residuals   78 22087.4  283.17               
## 
##  Response A7 :
##             Df Sum Sq Mean Sq F value    Pr(>F)    
## device       2   6724  3362.1  8.2697 0.0005531 ***
## Residuals   78  31712   406.6                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A8 :
##             Df Sum Sq Mean Sq F value   Pr(>F)   
## device       2   5201 2600.44  5.9564 0.003914 **
## Residuals   78  34053  436.58                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A9 :
##             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device       2  2718.9 1359.46  4.1533 0.01932 *
## Residuals   78 25531.1  327.32                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A10 :
##             Df Sum Sq Mean Sq F value  Pr(>F)  
## device       2   5554 2777.03  4.6881 0.01195 *
## Residuals   78  46204  592.36                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A11 :
##             Df Sum Sq Mean Sq F value    Pr(>F)    
## device       2  13829  6914.5  10.836 7.035e-05 ***
## Residuals   78  49771   638.1                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A12 :
##             Df Sum Sq Mean Sq F value Pr(>F)
## device       2   1705  852.37  1.6836 0.1924
## Residuals   78  39490  506.29               
## 
##  Response A13 :
##             Df Sum Sq Mean Sq F value  Pr(>F)  
## device       2   3353 1676.50  2.6873 0.07436 .
## Residuals   78  48661  623.85                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A14 :
##             Df  Sum Sq Mean Sq F value Pr(>F)
## device       2   702.6   351.3  1.4748 0.2351
## Residuals   78 18579.5   238.2               
## 
##  Response A15 :
##             Df Sum Sq Mean Sq F value  Pr(>F)  
## device       2   4114  2056.8  3.7431 0.02804 *
## Residuals   78  42861   549.5                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A16 :
##             Df Sum Sq Mean Sq F value Pr(>F)
## device       2   2903 1451.43  2.0514 0.1354
## Residuals   78  55186  707.51               
## 
##  Response A17 :
##             Df Sum Sq Mean Sq F value  Pr(>F)  
## device       2   5737 2868.58  3.9741 0.02272 *
## Residuals   78  56302  721.82                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A18 :
##             Df Sum Sq Mean Sq F value Pr(>F)
## device       2   2896 1448.02   2.369 0.1003
## Residuals   78  47676  611.23               
## 
##  Response A19 :
##             Df Sum Sq Mean Sq F value   Pr(>F)   
## device       2   6096 3047.92  5.5168 0.005742 **
## Residuals   78  43093  552.48                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A20 :
##             Df Sum Sq Mean Sq F value    Pr(>F)    
## device       2  14356  7178.0   13.82 7.285e-06 ***
## Residuals   78  40513   519.4                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A21 :
##             Df Sum Sq Mean Sq F value Pr(>F)
## device       2    701  350.75   0.719 0.4905
## Residuals   78  38053  487.86               
## 
## 9 observations deleted due to missingness

2.1.2 MANCOVA Test Results

Overall test summary:

# MANCOVA test
res.manc <- manova(
  cbind(
    completion.time.avg,
    error.count,
    A1,
    A2,
    A3,
    A4,
    A5,
    A6,
    A7,
    A8,
    A9,
    A10,
    A11,
    A12,
    A13,
    A14,
    A15,
    A16,
    A17,
    A18,
    A19,
    A20,
    A21
  ) ~ device +
    train.status + years.cat + age.range.cat + prior.experience.status +
    videogame.experience.status,
  data = df_transformed
)

summary(res.manc)
##                             Df  Pillai approx F num Df den Df    Pr(>F)
## device                       2 1.18577  2.97591     46     94 3.897e-06
## years.cat                    1 0.60617  3.07830     23     46 0.0005777
## age.range.cat                1 0.32916  0.98132     23     46 0.5045648
## prior.experience.status      1 0.27956  0.77609     23     46 0.7407980
## videogame.experience.status  1 0.41512  1.41951     23     46 0.1539565
## Residuals                   68                                         
##                                
## device                      ***
## years.cat                   ***
## age.range.cat                  
## prior.experience.status        
## videogame.experience.status    
## Residuals                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Summary of the analysis of variance model:

# Look to see which differ
summary.aov(res.manc)
##  Response completion.time.avg :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2   7209  3604.5  4.0898 0.02103 *
## years.cat                    1    125   124.7  0.1414 0.70803  
## age.range.cat                1    116   115.8  0.1314 0.71812  
## prior.experience.status      1    661   661.4  0.7505 0.38937  
## videogame.experience.status  1    925   925.4  1.0500 0.30914  
## Residuals                   68  59931   881.3                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response error.count :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       2  30.958 15.4790  4.4957 0.01466 *
## years.cat                    1   2.803  2.8030  0.8141 0.37010  
## age.range.cat                1   2.307  2.3067  0.6699 0.41593  
## prior.experience.status      1  11.865 11.8652  3.4461 0.06773 .
## videogame.experience.status  1   2.606  2.6056  0.7568 0.38740  
## Residuals                   68 234.128  3.4431                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A1 :
##                             Df  Sum Sq Mean Sq F value    Pr(>F)    
## device                       2  6.4745  3.2372  8.7659 0.0004102 ***
## years.cat                    1  1.0560  1.0560  2.8594 0.0954215 .  
## age.range.cat                1  0.8434  0.8434  2.2837 0.1353732    
## prior.experience.status      1  0.0458  0.0458  0.1239 0.7259223    
## videogame.experience.status  1  0.0148  0.0148  0.0401 0.8418278    
## Residuals                   68 25.1123  0.3693                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A2 :
##                             Df  Sum Sq Mean Sq F value Pr(>F)
## device                       2  1.2379 0.61897  1.4578 0.2399
## years.cat                    1  0.1586 0.15857  0.3735 0.5432
## age.range.cat                1  0.0059 0.00589  0.0139 0.9066
## prior.experience.status      1  0.0686 0.06858  0.1615 0.6890
## videogame.experience.status  1  0.0030 0.00301  0.0071 0.9332
## Residuals                   68 28.8727 0.42460               
## 
##  Response A3 :
##                             Df  Sum Sq Mean Sq F value   Pr(>F)   
## device                       2  2.5421 1.27103  5.0794 0.008791 **
## years.cat                    1  0.0196 0.01959  0.0783 0.780500   
## age.range.cat                1  0.6881 0.68809  2.7498 0.101871   
## prior.experience.status      1  0.0018 0.00182  0.0073 0.932272   
## videogame.experience.status  1  0.0528 0.05281  0.2110 0.647425   
## Residuals                   68 17.0156 0.25023                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A4 :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2  2.417  1.2085  2.0792 0.13290  
## years.cat                    1  3.177  3.1765  5.4651 0.02235 *
## age.range.cat                1  0.284  0.2842  0.4890 0.48678  
## prior.experience.status      1  0.524  0.5239  0.9013 0.34578  
## videogame.experience.status  1  2.074  2.0745  3.5691 0.06313 .
## Residuals                   68 39.524  0.5812                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A5 :
##                             Df  Sum Sq Mean Sq F value   Pr(>F)   
## device                       2   518.9  259.44  1.2138 0.303430   
## years.cat                    1  1450.2 1450.23  6.7849 0.011282 * 
## age.range.cat                1   110.8  110.82  0.5185 0.473971   
## prior.experience.status      1   140.9  140.87  0.6591 0.419725   
## videogame.experience.status  1  1649.2 1649.25  7.7160 0.007068 **
## Residuals                   68 14534.6  213.74                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A6 :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       2  1484.6  742.31  2.6582 0.07735 .
## years.cat                    1   582.2  582.20  2.0848 0.15336  
## age.range.cat                1    10.1   10.13  0.0363 0.84952  
## prior.experience.status      1   381.4  381.40  1.3658 0.24662  
## videogame.experience.status  1   694.3  694.31  2.4863 0.11948  
## Residuals                   68 18989.3  279.25                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A7 :
##                             Df  Sum Sq Mean Sq F value    Pr(>F)    
## device                       2  7773.3  3886.6  9.9294 0.0001647 ***
## years.cat                    1   197.0   197.0  0.5033 0.4804732    
## age.range.cat                1     1.9     1.9  0.0048 0.9447893    
## prior.experience.status      1    14.2    14.2  0.0362 0.8497085    
## videogame.experience.status  1   521.1   521.1  1.3313 0.2526113    
## Residuals                   68 26617.2   391.4                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A8 :
##                             Df  Sum Sq Mean Sq F value   Pr(>F)   
## device                       2  4851.4 2425.72  5.3231 0.007117 **
## years.cat                    1    81.1   81.14  0.1781 0.674374   
## age.range.cat                1   632.2  632.16  1.3872 0.242979   
## prior.experience.status      1     2.7    2.68  0.0059 0.939085   
## videogame.experience.status  1   261.6  261.56  0.5740 0.451301   
## Residuals                   68 30987.7  455.70                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A9 :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       2  2044.6 1022.31  3.5533 0.03406 *
## years.cat                    1   247.5  247.55  0.8604 0.35691  
## age.range.cat                1     2.1    2.15  0.0075 0.93139  
## prior.experience.status      1   400.0  400.04  1.3904 0.24244  
## videogame.experience.status  1   785.9  785.87  2.7314 0.10300  
## Residuals                   68 19564.4  287.71                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A10 :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2   5112 2556.11  4.5210 0.01434 *
## years.cat                    1    328  328.34  0.5807 0.44866  
## age.range.cat                1    546  545.70  0.9652 0.32937  
## prior.experience.status      1     82   81.77  0.1446 0.70491  
## videogame.experience.status  1   2952 2952.40  5.2219 0.02543 *
## Residuals                   68  38446  565.39                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A11 :
##                             Df Sum Sq Mean Sq F value    Pr(>F)    
## device                       2  12673  6336.4 10.1694 0.0001368 ***
## years.cat                    1     82    81.7  0.1312 0.7183312    
## age.range.cat                1     26    26.0  0.0417 0.8387788    
## prior.experience.status      1      4     4.2  0.0067 0.9349954    
## videogame.experience.status  1   1160  1160.5  1.8625 0.1768408    
## Residuals                   68  42370   623.1                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A12 :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       2  1270.1  635.03  1.4153 0.24991  
## years.cat                    1  2698.3 2698.31  6.0139 0.01677 *
## age.range.cat                1  1304.3 1304.34  2.9070 0.09276 .
## prior.experience.status      1   521.9  521.87  1.1631 0.28463  
## videogame.experience.status  1   273.7  273.72  0.6101 0.43748  
## Residuals                   68 30510.4  448.68                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A13 :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2   3905 1952.40  3.1979 0.04706 *
## years.cat                    1    390  390.22  0.6392 0.42680  
## age.range.cat                1     25   24.54  0.0402 0.84169  
## prior.experience.status      1      5    5.06  0.0083 0.92772  
## videogame.experience.status  1   2985 2984.67  4.8887 0.03040 *
## Residuals                   68  41515  610.52                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A14 :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       2  1146.8  573.39  2.6127 0.08069 .
## years.cat                    1     4.2    4.23  0.0193 0.89004  
## age.range.cat                1   125.9  125.88  0.5736 0.45147  
## prior.experience.status      1   179.3  179.31  0.8170 0.36924  
## videogame.experience.status  1  1162.1 1162.13  5.2952 0.02446 *
## Residuals                   68 14923.7  219.47                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A15 :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2   3066  1533.1  2.8139 0.06697 .
## years.cat                    1    182   181.8  0.3336 0.56544  
## age.range.cat                1     52    52.1  0.0956 0.75815  
## prior.experience.status      1     82    82.0  0.1505 0.69930  
## videogame.experience.status  1   3191  3191.4  5.8576 0.01819 *
## Residuals                   68  37049   544.8                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A16 :
##                             Df Sum Sq Mean Sq F value   Pr(>F)   
## device                       2   2665  1332.4  2.1389 0.125643   
## years.cat                    1   1476  1476.4  2.3699 0.128332   
## age.range.cat                1    477   477.4  0.7663 0.384435   
## prior.experience.status      1      4     3.6  0.0058 0.939398   
## videogame.experience.status  1   5334  5333.6  8.5619 0.004663 **
## Residuals                   68  42361   623.0                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A17 :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2   5043 2521.43  3.4304 0.03807 *
## years.cat                    1    634  634.15  0.8628 0.35625  
## age.range.cat                1    176  175.89  0.2393 0.62628  
## prior.experience.status      1     18   18.24  0.0248 0.87528  
## videogame.experience.status  1    665  664.51  0.9041 0.34506  
## Residuals                   68  49981  735.01                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A18 :
##                             Df Sum Sq Mean Sq F value  Pr(>F)  
## device                       2   2594 1297.21  2.2881 0.10922  
## years.cat                    1    171  170.86  0.3014 0.58483  
## age.range.cat                1    376  376.44  0.6640 0.41800  
## prior.experience.status      1    657  656.85  1.1586 0.28556  
## videogame.experience.status  1   1691 1691.08  2.9828 0.08869 .
## Residuals                   68  38552  566.95                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A19 :
##                             Df Sum Sq Mean Sq F value   Pr(>F)   
## device                       2   5548 2774.19  5.1957 0.007947 **
## years.cat                    1    416  415.91  0.7789 0.380573   
## age.range.cat                1    192  192.27  0.3601 0.550451   
## prior.experience.status      1      0    0.01  0.0000 0.996504   
## videogame.experience.status  1   1677 1677.22  3.1412 0.080817 . 
## Residuals                   68  36308  533.94                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A20 :
##                             Df Sum Sq Mean Sq F value    Pr(>F)    
## device                       2  15819  7909.7 16.7654 1.205e-06 ***
## years.cat                    1    249   248.9  0.5276    0.4701    
## age.range.cat                1    225   224.8  0.4765    0.4924    
## prior.experience.status      1     30    30.1  0.0638    0.8014    
## videogame.experience.status  1   1262  1261.7  2.6743    0.1066    
## Residuals                   68  32082   471.8                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response A21 :
##                             Df  Sum Sq Mean Sq F value   Pr(>F)   
## device                       2  1144.0   572.0  1.4547 0.240635   
## years.cat                    1  2273.4  2273.4  5.7817 0.018923 * 
## age.range.cat                1  3173.0  3173.0  8.0696 0.005934 **
## prior.experience.status      1   271.0   271.0  0.6892 0.409350   
## videogame.experience.status  1   150.9   150.9  0.3837 0.537705   
## Residuals                   68 26737.8   393.2                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 15 observations deleted due to missingness

2.1.3 Post-hoc Tukey’s Test for the Pair-wise Differences among the Hand-controllers

In the snippets below, post-hoc Tukey's tests were applied to the variables found significant in the previous section. Within the family-wise confidence level graphs, a confidence interval that does not include zero indicates a significant difference between the corresponding pair of hand-controllers.
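The same "CI excludes zero" criterion can also be checked programmatically. The sketch below is a hypothetical illustration (it uses R's built-in warpbreaks data, not the study data) of filtering a TukeyHSD result down to the significant pairs:

```r
# Fit a one-way ANOVA and run Tukey's HSD on the built-in warpbreaks data
fit <- aov(breaks ~ tension, data = warpbreaks)
tk  <- TukeyHSD(fit)

# tk$tension is a matrix with columns diff, lwr, upr, and "p adj";
# a pair is significant exactly when its 95% family-wise CI excludes zero,
# which is equivalent to p adj < 0.05
res <- as.data.frame(tk$tension)
significant <- res[res$`p adj` < 0.05, ]
print(significant)
```

The same filter could be applied to any of the `tukey.test$device` matrices below to tabulate significant device pairs instead of reading them off the plots.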

2.1.3.1 Completion Time

# completion.time.avg
print("completion.time.avg:")
## [1] "completion.time.avg:"
tk.lm <- lm(completion.time.avg
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                        diff          lwr      upr     p adj
## Premium-Excalibur 17.583333  -0.04115869 35.20783 0.0506736
## Sigma7-Excalibur  22.925000   5.30050798 40.54949 0.0072498
## Sigma7-Premium     5.341667 -12.28282536 22.96616 0.7507495
plot(tukey.test, sub="for Completion Time", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.2 Error Count

# error.count
print("error.count:")
## [1] "error.count:"
tk.lm <- lm(error.count
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff        lwr       upr     p adj
## Premium-Excalibur  1.4666667  0.3729454 2.5603880 0.0054450
## Sigma7-Excalibur   0.6793103 -0.4237993 1.7824200 0.3109912
## Sigma7-Premium    -0.7873563 -1.8904660 0.3157533 0.2101975
plot(tukey.test, sub="for Error Count", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.3 Question A1

# A1
print("A1:")
## [1] "A1:"
tk.lm <- lm(A1
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                          diff        lwr        upr     p adj
## Premium-Excalibur -0.07142857 -0.4549965  0.3121394 0.8969577
## Sigma7-Excalibur  -0.60714286 -0.9873898 -0.2268959 0.0007714
## Sigma7-Premium    -0.53571429 -0.9159612 -0.1554674 0.0033322
plot(tukey.test, sub="for A1", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.4 Question A3

# A3
print("A3:")
## [1] "A3:"
tk.lm <- lm(A3
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff         lwr        upr     p adj
## Premium-Excalibur -0.4285714 -0.74425452 -0.1128883 0.0048588
## Sigma7-Excalibur  -0.1330049 -0.44595478  0.1799449 0.5698889
## Sigma7-Premium     0.2955665 -0.01738335  0.6085164 0.0682112
plot(tukey.test, sub="for A3", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.5 Question A7

# A7
print("A7:")
## [1] "A7:"
tk.lm <- lm(A7
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                        diff        lwr        upr     p adj
## Premium-Excalibur -24.66667 -37.067038 -12.266295 0.0000242
## Sigma7-Excalibur  -13.66667 -26.067038  -1.266295 0.0271586
## Sigma7-Premium     11.00000  -1.400371  23.400371 0.0926331
plot(tukey.test, sub="for A7", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.6 Question A8

# A8
print("A8:")
## [1] "A8:"
tk.lm <- lm(A8
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                    diff        lwr       upr     p adj
## Premium-Excalibur -21.5 -34.068178 -8.931822 0.0002920
## Sigma7-Excalibur  -15.0 -27.568178 -2.431822 0.0151089
## Sigma7-Premium      6.5  -6.068178 19.068178 0.4369766
plot(tukey.test, sub="for A8", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.7 Question A9

# A9
print("A9:")
## [1] "A9:"
tk.lm <- lm(A9
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff       lwr        upr     p adj
## Premium-Excalibur -12.500000 -24.05055 -0.9494473 0.0307086
## Sigma7-Excalibur  -15.666667 -27.21722 -4.1161140 0.0048630
## Sigma7-Premium     -3.166667 -14.71722  8.3838860 0.7907591
plot(tukey.test, sub="for A9", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.8 Question A10

# A10
print("A10:")
## [1] "A10:"
tk.lm <- lm(A10
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff       lwr        upr     p adj
## Premium-Excalibur -14.833333 -29.45759 -0.2090763 0.0460408
## Sigma7-Excalibur  -20.500000 -35.12426 -5.8757430 0.0034779
## Sigma7-Premium     -5.666667 -20.29092  8.9575903 0.6266987
plot(tukey.test, sub="for A10", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.9 Question A11

# A11
print("A11:")
## [1] "A11:"
tk.lm <- lm(A11
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                        diff       lwr        upr     p adj
## Premium-Excalibur -10.00000 -25.59120   5.591199 0.2822846
## Sigma7-Excalibur  -31.66667 -47.25787 -16.075468 0.0000164
## Sigma7-Premium    -21.66667 -37.25787  -6.075468 0.0038059
plot(tukey.test, sub="for A11", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.10 Question A15

# A15
print("A15:")
## [1] "A15:"
tk.lm <- lm(A15
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff        lwr       upr     p adj
## Premium-Excalibur -17.063218 -31.547755 -2.578682 0.0167377
## Sigma7-Excalibur  -12.166667 -26.527929  2.194595 0.1134022
## Sigma7-Premium      4.896552  -9.587985 19.381089 0.7001895
plot(tukey.test, sub="for A15", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.11 Question A17

# A17
print("A17:")
## [1] "A17:"
tk.lm <- lm(A17
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                        diff       lwr      upr     p adj
## Premium-Excalibur -21.12069 -37.41757 -4.82381 0.0075059
## Sigma7-Excalibur  -14.50000 -30.65818  1.65818 0.0877779
## Sigma7-Premium      6.62069  -9.67619 22.91757 0.5984164
plot(tukey.test, sub="for A17", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.12 Question A19

# A19
print("A19:")
## [1] "A19:"
tk.lm <- lm(A19
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff        lwr       upr     p adj
## Premium-Excalibur -19.166667 -33.278971 -5.054363 0.0047997
## Sigma7-Excalibur   -5.166667 -19.278971  8.945637 0.6587051
## Sigma7-Premium     14.000000  -0.112304 28.112304 0.0523273
plot(tukey.test, sub="for A19", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.1.3.13 Question A20

# A20
print("A20:")
## [1] "A20:"
tk.lm <- lm(A20
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                         diff        lwr        upr     p adj
## Premium-Excalibur -30.666667 -44.586633 -16.746700 0.0000031
## Sigma7-Excalibur   -8.333333 -22.253300   5.586633 0.3313368
## Sigma7-Premium     22.333333   8.413367  36.253300 0.0007104
plot(tukey.test, sub="for A20", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")


2.2 Performing the MANOVA and MANCOVA Tests on Sensor Feature Results

# reading data from the saved files

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensor.RData",
    sep = ""
  )
)

# combine the survey columns with each device's sensor-feature columns,
# labelling every block with its device name
df_transformed <-
  rbind(
    cbind(subj_survey[[1]][, 2:7], subj_feature[[1]][, 2:6],
          device = 'Excalibur'),
    cbind(subj_survey[[1]][, 2:7], subj_feature[[2]][, 2:6],
          device = 'Premium'),
    cbind(subj_survey[[1]][, 2:7], subj_feature[[3]][, 2:6],
          device = 'Sigma7')
  )

In this section, the MANOVA test (comparing the sensor feature results with device as the factor) and the MANCOVA test (comparing the sensor feature results with device as the factor and the demographic variables as covariates) were performed. The significance codes for differences between devices are provided below each test result. The hand-controllers differed significantly on several sensor features; among them, the arm angle sum (sum.arm.angle), force RMS (sos.force), and travel distance (travel.distance) were the significant performance features.
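The MANOVA-then-MANCOVA pattern used here can be sketched on a toy dataset. The example below is a hypothetical illustration on R's built-in iris data, where Species plays the role of the device factor and Sepal.Width stands in for a demographic covariate:

```r
# MANOVA: multivariate responses against the factor alone
man  <- manova(cbind(Sepal.Length, Petal.Length) ~ Species, data = iris)

# MANCOVA: the same model with a covariate added, so the factor effect
# is assessed after adjusting for the covariate
manc <- manova(cbind(Sepal.Length, Petal.Length) ~ Species + Sepal.Width,
               data = iris)

summary(man)       # Pillai's trace for the factor alone
summary(manc)      # factor effect adjusted for the covariate
summary.aov(manc)  # per-response univariate follow-up ANOVAs
```

As in the analyses below, `summary()` reports the overall multivariate (Pillai) test and `summary.aov()` shows which individual responses drive the difference.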

2.2.1 MANOVA Test Results

Overall test summary:

# MANOVA test
res.man <- manova(
  cbind(
    travel.distance,
    number.clutches,
    sum.gimbal.angle,
    sum.arm.angle,
    sos.force
  ) ~ device,
  data = df_transformed
)
summary(res.man)
##           Df  Pillai approx F num Df den Df    Pr(>F)    
## device     1 0.34407   5.6652      5     54 0.0002861 ***
## Residuals 58                                             
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Summary of the analysis of variance model:

# Look to see which differ
summary.aov(res.man)
##  Response travel.distance :
##             Df Sum Sq Mean Sq F value   Pr(>F)    
## device       1  45838   45838  13.826 0.000454 ***
## Residuals   58 192295    3315                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response number.clutches :
##             Df Sum Sq Mean Sq F value Pr(>F)  
## device       1  13.27 13.2697  2.8026 0.0995 .
## Residuals   58 274.62  4.7348                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response sum.gimbal.angle :
##             Df Sum Sq  Mean Sq F value Pr(>F)
## device       1 0.0197 0.019681  0.3243 0.5712
## Residuals   58 3.5197 0.060684               
## 
##  Response sum.arm.angle :
##             Df  Sum Sq  Mean Sq F value    Pr(>F)    
## device       1 0.07005 0.070050  12.716 0.0007343 ***
## Residuals   58 0.31951 0.005509                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response sos.force :
##             Df     Sum Sq   Mean Sq F value   Pr(>F)   
## device       1  352385858 352385858  7.7173 0.007357 **
## Residuals   58 2648389909  45661895                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 30 observations deleted due to missingness

2.2.2 MANCOVA Test Results

Overall test summary:

# MANCOVA test
res.manc <- manova(
  cbind(
    travel.distance,
    number.clutches,
    sum.gimbal.angle,
    sum.arm.angle,
    sos.force
  ) ~ device +
    train.status + years.cat + age.range.cat + prior.experience.status +
    videogame.experience.status,
  data = df_transformed
)
summary(res.manc)
##                             Df  Pillai approx F num Df den Df    Pr(>F)
## device                       1 0.40090   6.1563      5     46 0.0001907
## years.cat                    1 0.31463   4.2233      5     46 0.0030479
## age.range.cat                1 0.02274   0.2141      5     46 0.9547967
## prior.experience.status      1 0.20022   2.3032      5     46 0.0598622
## videogame.experience.status  1 0.16991   1.8832      5     46 0.1157109
## Residuals                   50                                         
##                                
## device                      ***
## years.cat                   ** 
## age.range.cat                  
## prior.experience.status     .  
## videogame.experience.status    
## Residuals                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Summary of the analysis of variance model:

# Look to see which differ
summary.aov(res.manc)
##  Response travel.distance :
##                             Df Sum Sq Mean Sq F value Pr(>F)    
## device                       1  47599   47599 15.1048 0.0003 ***
## years.cat                    1     33      33  0.0105 0.9188    
## age.range.cat                1   2628    2628  0.8341 0.3655    
## prior.experience.status      1   6103    6103  1.9366 0.1702    
## videogame.experience.status  1    514     514  0.1632 0.6879    
## Residuals                   50 157564    3151                   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response number.clutches :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       1  13.533 13.5331  3.1294 0.08299 .
## years.cat                    1  10.003 10.0033  2.3132 0.13458  
## age.range.cat                1   0.256  0.2563  0.0593 0.80866  
## prior.experience.status      1   0.010  0.0104  0.0024 0.96102  
## videogame.experience.status  1  10.339 10.3393  2.3909 0.12835  
## Residuals                   50 216.225  4.3245                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response sum.gimbal.angle :
##                             Df  Sum Sq Mean Sq F value  Pr(>F)  
## device                       1 0.01877 0.01877  0.3190 0.57473  
## years.cat                    1 0.37928 0.37928  6.4445 0.01429 *
## age.range.cat                1 0.00001 0.00001  0.0002 0.98760  
## prior.experience.status      1 0.00895 0.00895  0.1521 0.69815  
## videogame.experience.status  1 0.11571 0.11571  1.9661 0.16704  
## Residuals                   50 2.94266 0.05885                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response sum.arm.angle :
##                             Df   Sum Sq  Mean Sq F value    Pr(>F)    
## device                       1 0.075155 0.075155 14.1302 0.0004473 ***
## years.cat                    1 0.033148 0.033148  6.2323 0.0158850 *  
## age.range.cat                1 0.000250 0.000250  0.0470 0.8292437    
## prior.experience.status      1 0.007858 0.007858  1.4774 0.2298856    
## videogame.experience.status  1 0.003004 0.003004  0.5648 0.4558599    
## Residuals                   50 0.265939 0.005319                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
##  Response sos.force :
##                             Df     Sum Sq   Mean Sq F value   Pr(>F)   
## device                       1  383400647 383400647  8.3939 0.005574 **
## years.cat                    1  102705897 102705897  2.2486 0.140026   
## age.range.cat                1   46014668  46014668  1.0074 0.320356   
## prior.experience.status      1   13852109  13852109  0.3033 0.584293   
## videogame.experience.status  1    6259909   6259909  0.1370 0.712795   
## Residuals                   50 2283804938  45676099                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 34 observations deleted due to missingness

2.2.3 Post-hoc Tukey’s Test for the Pair-wise Differences among the Hand-controllers

In the snippets below, post-hoc Tukey's tests were applied to the variables found significant in the previous section. Within the family-wise confidence level graphs, a confidence interval that does not include zero indicates a significant difference between the corresponding pair of hand-controllers.

2.2.3.1 Travel Distance

# travel.distance 
print("travel.distance:")
## [1] "travel.distance:"
tk.lm <- lm(travel.distance 
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                       diff        lwr       upr     p adj
## Premium-Excalibur 55.27976   2.604844 107.95468 0.0374330
## Sigma7-Excalibur  73.86901  21.194096 126.54393 0.0034630
## Sigma7-Premium    18.58925 -34.085665  71.26417 0.6783576
plot(tukey.test, sub="for Travel Distance", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.2.3.2 Number of Clutches

# number.clutches
print("number.clutches:")
## [1] "number.clutches:"
tk.lm <- lm(number.clutches
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                        diff        lwr      upr     p adj
## Premium-Excalibur 0.9405556 -0.5411692 2.422280 0.2895704
## Sigma7-Excalibur  1.1176389 -0.3640858 2.599364 0.1760761
## Sigma7-Premium    0.1770833 -1.3046414 1.658808 0.9562438
plot(tukey.test, sub="for Number of Clutches", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.2.3.3 Force RMS

# sos.force
print("sos.force:")
## [1] "sos.force:"
tk.lm <- lm(sos.force
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                        diff       lwr       upr     p adj
## Premium-Excalibur 4846.8949 -1760.800 11454.589 0.1930444
## Sigma7-Excalibur  5518.1530 -1089.541 12125.848 0.1204080
## Sigma7-Premium     671.2581 -5936.436  7278.953 0.9681831
plot(tukey.test, sub="for Force RMS", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.2.3.4 Gimbal Angle Sum

# sum.gimbal.angle
print("sum.gimbal.angle:")
## [1] "sum.gimbal.angle:"
tk.lm <- lm(sum.gimbal.angle
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                          diff         lwr        upr     p adj
## Premium-Excalibur  0.03622276 -0.09458567  0.1670312 0.7870287
## Sigma7-Excalibur  -0.27766402 -0.40847245 -0.1468556 0.0000068
## Sigma7-Premium    -0.31388678 -0.44469521 -0.1830783 0.0000004
plot(tukey.test, sub="for Gimbal Angle Sum", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")

2.2.3.5 Arm Angle Sum

# sum.arm.angle
print("sum.arm.angle:")
## [1] "sum.arm.angle:"
tk.lm <- lm(sum.arm.angle
            ~ device,
            data = df_transformed)
tk.av <- aov(tk.lm)

tukey.test <- TukeyHSD(tk.av)
tukey.test
##   Tukey multiple comparisons of means
##     95% family-wise confidence level
## 
## Fit: aov(formula = tk.lm)
## 
## $device
##                          diff        lwr         upr     p adj
## Premium-Excalibur -0.06833722 -0.1066978 -0.02997669 0.0007343
plot(tukey.test, sub="for Arm Angle Sum", cex.main=2, cex.lab=1.5, cex.axis=1, col = "blue")


2.3 Establishing a Regression Model Between the Subjective and Objective Performance Metrics

In this section, regression analysis was used to establish a relationship between the objective performance metrics derived from the sensor data within the hand-controller and the subjective performance measures collected through the user survey. The goal was to provide a guideline for identifying the influential device factors and managing them in order to optimize performance. A PCA identified the most influential factors. Performance measures that demonstrated positive correlation or a significant difference (\(p <\) 0.05) between the devices were included in the regression model. The measures were selected in a manner that maximized the significance of the regression model and the Adjusted \(R-squared\) value.

2.3.1 Principal Component Analysis

The first graph is the scree plot, which visualizes the eigenvalues and shows the percentage of variance explained by each principal component. In the second figure, positively correlated variables point to the same side of the plot and negatively correlated variables point to opposite sides. The longer arrows (colored green) have the largest contributions to the principal components. In the third figure, each point corresponds to the principal components of a user’s responses on a device. The mean and covariance of the responses on each device are plotted as shaded ovals with their respective centers. The segregation between the devices reflects the differences in the users’ opinions.

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurvey.RData",
    sep = ""
  )
)


df_transformed <-
  rbind(
    cbind(subj_survey[[1]][, 2:7], subj_survey[[2]][, 6:28], device = 'Excalibur'),
    cbind(subj_survey[[1]][, 2:7], subj_survey[[3]][, 6:28], device = 'Premium'),
    cbind(subj_survey[[1]][, 2:7], subj_survey[[4]][, 6:28], device = 'Sigma7')
  )


subj_dev <- numeric(length = 90)
devices <- c('E', 'P', 'S')
for (j in 1:3) {
  for (i in 1:30) {
    subj_dev[30 * (j - 1) + i] <- sprintf("%s %s", devices[j], i)
  }
}
row.names(df_transformed) <- subj_dev


res.pca <- prcomp( ~ .,
                   data = df_transformed[, 7:29],
                   na.action = na.omit,
                   scale = TRUE)

# Visualize eigenvalues (scree plot). Show the percentage of variances
# explained by each principal component
fviz_eig(res.pca, barcolor = "lightgoldenrod", barfill = "lightgoldenrod", addlabels = TRUE)

# Graph of variables. Positive correlated variables point to the same side of the plot.
# Negative correlated variables point to opposite sides of the graph.
fviz_pca_var(
  res.pca,
  col.var = "contrib",
  # Color by contributions to the PC
  gradient.cols = 
    c("goldenrod1",
      "gold1",
      "gold",
      "lightgoldenrod",
      "aquamarine2",
      "aquamarine3",
      "aquamarine4"
    ),
  repel = TRUE     # Avoid text overlapping
)

# Grouping the variables
groups <- as.factor(df_transformed$device[1:81])
fviz_pca_ind(
  res.pca,
  col.ind = groups,
  # color by groups
  palette = c("mediumaquamarine", "salmon", "cornflowerblue"),
  addEllipses = TRUE,
  # Concentration ellipses
  ellipse.type = "confidence",
  legend.title = "Groups",
  repel = TRUE
)

2.3.2 Regression Model Establishment

The regression model that relates the sensor-based performance features to the mean values of the subjective metrics is provided below. Metrics that demonstrated significance in the test and large positive correlations in the PCA were selected for the regression analysis, i.e., A1, A4, A5, A6, A7, A8, A9, A10, A13, A14, A15, A16, A17, and A18. The coefficients estimate the trends, while \(R-squared\) represents the scatter around the regression line. A \(p =\) 0.0002 indicates that the model is significant regardless of the \(R-squared\) value. Small \(R-squared\) values are problematic when precise predictions are needed; however, in this case, our goal was to identify the trends that optimize performance with the hand-controller. Small \(R-squared\) values are common in models of human performance because individuals vary widely. In spite of the small \(R-squared\) value, the small \(p-value\) indicates a real relationship between the significant predictors and the response variable.

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
    sep = ""
  )
)

df_transformed <-
  rbind(
    cbind(subj_survey_norm[[1]][, 2:7], subj_survey_norm[[2]][, 6:28],
          subj_feature_norm[[1]][, 2:6], device = 'Excalibur'),
    cbind(subj_survey_norm[[1]][, 2:7], subj_survey_norm[[3]][, 6:28],
          subj_feature_norm[[2]][, 2:6], device = 'Premium'),
    cbind(subj_survey_norm[[1]][, 2:7], subj_survey_norm[[4]][, 6:28],
          subj_feature_norm[[3]][, 2:6], device = 'Sigma7')
  )

# A4 is important
mlm1 <- lm(
  rowMeans(
    cbind(A1,
          A4,
          A5, 
          A6, 
          A7, 
          A8, 
          A9,
          A10,
          A13,
          A14,
          A15, 
          A16, 
          A17, 
          A18)
  ) ~ completion.time.avg +
    travel.distance + number.clutches + sum.gimbal.angle +
    sos.force,
  data = df_transformed
)

summary(mlm1)
## 
## Call:
## lm(formula = rowMeans(cbind(A1, A4, A5, A6, A7, A8, A9, A10, 
##     A13, A14, A15, A16, A17, A18)) ~ completion.time.avg + travel.distance + 
##     number.clutches + sum.gimbal.angle + sos.force, data = df_transformed)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.35046 -0.07563  0.00305  0.10020  0.27577 
## 
## Coefficients:
##                      Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          0.761911   0.070554  10.799  < 2e-16 ***
## completion.time.avg -0.006987   0.081972  -0.085  0.93230    
## travel.distance     -0.988784   0.351644  -2.812  0.00626 ** 
## number.clutches     -0.164889   0.092935  -1.774  0.08003 .  
## sum.gimbal.angle     0.597293   0.343087   1.741  0.08574 .  
## sos.force            0.772836   0.294695   2.622  0.01054 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.1323 on 76 degrees of freedom
##   (8 observations deleted due to missingness)
## Multiple R-squared:  0.2678, Adjusted R-squared:  0.2196 
## F-statistic: 5.558 on 5 and 76 DF,  p-value: 0.0002043

The regression model formula is as follows:

\[ Perf_{subj} = 0.76 - 0.01 \ CT - 0.99 \ TD - 0.16 \ NC + 0.6 \ GA_{sum} - 0.77 \ SOSF, \]

where \(Perf_{subj}\) is the average subjective performance measure, \(CT\) is the completion time, \(TD\) is the travel distance (measured from Kuka’s end effector), \(NC\) is the number of clutches, \(GA_{sum}\) is the sum of changes in gimbal angles, and \(SOSF\) is the square root of the sum of the squared forces, all derived from the device sensors and calculated across the 16 (sub-)task repetitions.
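As a quick sanity check, the fitted equation can be evaluated directly. The sketch below (using the rounded coefficients from the equation above) predicts \(Perf_{subj}\) for a hypothetical set of normalized feature values; the input profile is illustrative only.

```r
# Predict the subjective performance score from the rounded
# regression coefficients reported above.
predict_perf <- function(ct, td, nc, ga_sum, sosf) {
  0.76 - 0.01 * ct - 0.99 * td - 0.16 * nc + 0.6 * ga_sum - 0.77 * sosf
}

# Hypothetical mid-range user profile: all normalized features at 0.5
predict_perf(0.5, 0.5, 0.5, 0.5, 0.5)  # 0.095
```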

2.3.3 Model Optimization

After deriving the regression model equation, it was optimized in order to identify the feature values that yield the highest performance with the hand-controller, as follows:

f <- function(x) {
  stopifnot(is.numeric(x))
  stopifnot(is.finite(x))
  stopifnot(length(x) == 5)
  x1 <- x[1]
  x2 <- x[2]
  x3 <- x[3]
  x4 <- x[4]
  x5 <- x[5]
  - 1 * (0.76 -
  (0.01 * x1) -
  (0.99 * x2) -
  (0.16 * x3) +
  (0.6  * x4) -
  (0.77 * x5))
}

ftoo <-
  deriv(
  expression(-1 * (
  0.76 -
  (0.01 * x1) -
  (0.99 * x2) -
  (0.16 * x3) +
  (0.6  * x4) -
  (0.77 * x5)
  )),
  namevec = c("x1", "x2", "x3", "x4", "x5"),
  function.arg = TRUE
  )

ftootoo <- function(x) {
  stopifnot(is.numeric(x))
  stopifnot(is.finite(x))
  stopifnot(length(x) == 5)
  x1 <- x[1]
  x2 <- x[2]
  x3 <- x[3]
  x4 <- x[4]
  x5 <- x[5]
  ftoo(x1, x2, x3, x4, x5)
}

print("Negative of the equation is optimized to get the maximum:")
## [1] "Negative of the equation is optimized to get the maximum:"
optim(c(0.5, 0.5, 0.5, 0.5, 0.5), 
      f, 
      function(x) attr(ftootoo(x), "gradient"),
      method = "L-BFGS-B",  # box constraints require L-BFGS-B
      lower = c(0, 0, 0, 0, 0), 
      upper = c(1, 1, 1, 1, 1))
## $par
## [1] 0 0 0 1 0
## 
## $value
## [1] -1.36
## 
## $counts
## function gradient 
##        8        8 
## 
## $convergence
## [1] 0
## 
## $message
## [1] "CONVERGENCE: NORM OF PROJECTED GRADIENT <= PGTOL"

In order to maximize the equation, its negative was minimized. The optimization was performed with L-BFGS-B, a bound-constrained quasi-Newton (variable metric) method that uses function values and gradients to build up a picture of the surface to be optimized. The results suggest the following values, on a scale of 0-1, for obtaining the maximum performance of the hand-controller:

\(CT = 0\)

\(TD = 0\)

\(NC = 0\)

\(GA_{sum} = 1\)

\(SOSF = 0\)
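Because the objective is linear in the bounded features, the maximizer can also be read off directly from the coefficient signs: each feature goes to its upper bound when its coefficient is positive and to its lower bound otherwise. A minimal sketch of this vertex argument:

```r
# For a linear objective over a [0, 1] box, the maximum sits at the
# vertex selected by the coefficient signs (positive -> 1, else 0).
coefs <- c(CT = -0.01, TD = -0.99, NC = -0.16, GA.sum = 0.6, SOSF = -0.77)
vertex <- as.numeric(coefs > 0)          # 0 0 0 1 0, matching optim()
max_value <- 0.76 + sum(coefs * vertex)  # 1.36, matching -$value above
```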


3 Surgeon Skill Prediction

In order to apply a prediction algorithm that identifies a surgeon’s skill level from the hand-controller performance features, the first step is to learn the patterns in the data through exploration and visualization. In this section, data visualizations are provided to determine the correlations between pairs of features and the distribution of the data for each class of Novice and Expert surgeons.

# reading data from the saved file
load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/ResultsGenSurveyNorm.RData",
    sep = ""
  )
)

load(
  paste(
    "~/Desktop/Canada/neuroArm/neuroArmPLUS",
    "/Smart Handcontroller/Part 2/data/FeatureGenSensorNorm.RData",
    sep = ""
  )
)


# find the average of features across 3 devices
subj_feature_norm_ESP <-
  (
    cbind(
      as.numeric(subj_survey_norm[[2]][, 2]),
      subj_survey_norm[[2]][, 3],
      subj_feature_norm[[1]][, 2:5]
    ) +
      cbind(
        as.numeric(subj_survey_norm[[3]][, 2]),
        subj_survey_norm[[3]][, 3],
        subj_feature_norm[[2]][, 2:5]
      ) +
      cbind(
        as.numeric(subj_survey_norm[[4]][, 2]),
        subj_survey_norm[[4]][, 3],
        subj_feature_norm[[3]][, 2:5]
      )
  ) / 3
experience.level <- subj_survey_norm[[1]][, 3]
experience.level[experience.level == 1 | experience.level == 2] <- "Novice"
experience.level[experience.level == 3 | experience.level == 4] <- "Expert"
experience.level[11] <- "Expert"  # assigning the missing value

lr_data <- cbind(subj_feature_norm_ESP, as.character(experience.level))
row_names <- experience.level
col_names <- c("Completion Time", "Error Count", "Travel Distance", "Number of Clutches", "Force RMS", "Gimbal Angle Sum", "Experience Level")
colnames(lr_data) <- col_names

# replacing the missing error count with the mean of its group
lr_data[4, 2] <- mean(lr_data[lr_data[, 7] == "Novice", 2], na.rm = TRUE) 

3.1 Feature Data Correlation and Distribution

The figure below shows box plots of the features, indicating the median, interquartile range, and extremes of the data.

par(mfrow = c(1,6))  # number of subgraphs
par(pin = c(2,5))  # plot areas for graphs
for (i in 1:6) {
boxplot(lr_data[, i],
        main = col_names[i],
        outline = FALSE,
        xlab = "Value",
        cex.main = 3,
        cex.lab = 2,
        cex.axis = 2,
        col = "gold"
       )
}

The histograms, correlation values, and scatter plots in the figure below illustrate the data distribution in each class and the correlations between features. The distribution of each variable is shown in the diagonal boxes of the grid. The boxes below the diagonal display the bivariate scatter plots; in these, red circles denote Expert and green circles Novice surgeons, and linear fits and correlation ellipses are overlaid on the scattered data. The boxes above the diagonal give the Pearson correlations with their significance levels shown as stars. Each significance level is associated with a symbol: \(p-values\) (0, 0.001, 0.01, 0.05, 0.1, 1) correspond to \(symbols\) (\("***"\), \("**"\), \("*"\), \("."\), \(" "\)).

# par(mfrow = c(1,1))
# par(pin = c(5,10))  # plot areas for graphs
# col1 <-
#   colorRampPalette(
#     c("goldenrod1",
#       "gold",
#       "gold1",
#       "lightgoldenrod1",
#       "lightgoldenrodyellow",
#       "aquamarine",
#       "aquamarine2",
#       "aquamarine4"
#       )
#   )
# correlations <- cor(lr_data[, -7])
# res1 <- cor.mtest(correlations, conf.level = .95)
# corrplot(
#   correlations,
#   title = "The Graphical Display of the Features Correlation Matrix",
#   cex.main = 3,
#   type = "upper",
#   order = "hclust", # reordering the fatures based on the correlation cluster
#   p.mat = res1$p,
#   sig.level = .2,  # crossed the insignificant value according to the significant level
#   method = "circle",
#   col = col1(100),
#   tl.col = "black",
#   tl.cex = 2,
#   cl.cex = 1.5,
#   mar=c(0,0,5,0)
# )


# The distribution of each variable is shown on the diagonal.
# On the bottom of the diagonal : 
# the bivariate scatter plots with a fitted line are displayed
# On the top of the diagonal : 
# the value of the correlation plus the significance level as stars
# Each significance level is associated to a symbol : 
# p-values(0, 0.001, 0.01, 0.05, 0.1, 1) <=> symbols(“***”, “**”, “*”, “.”, " “)
pairs.panels(
  lr_data[, -7],
  bg = c("firebrick1", "lightgreen")[lr_data$`Experience Level`],
  hist.col = "lightgoldenrod1",
  cor = TRUE,
  method = "pearson",
  main = "Histograms, Scatter Plots and Correlations of Features",
  density = TRUE,
  ellipses = TRUE,
  stars = TRUE,
  lm = TRUE,
  pch = 21,
  cex = 2,
  cex.main = 3,
  cex.cor = 0.5,
  cex.axis = 2,
  cex.labels = 2.5,
  font.labels = 2
)

The distributions of the data were normal or mixtures with the medians shifted towards the lower values. This indicated the existence of a few participants whose performance differed noticeably from the rest. All of the features were positively correlated except for wrist angular movements and number of clutches. Each clutch marks a moment when the user had to move the hand-controller back into its optimal workspace, which requires additional time and travel distance. These users spent less time in the optimal workspace and had to use more clutches; they therefore used more arm movements and fewer wrist movements and exerted higher forces. This is likely because forces are more difficult to control outside the optimal workspace, i.e., approaching singularity, or because these users were novices who had not yet refined their surgical technique and motion. Users who took longer to complete the task travelled longer distances on the board, used a higher number of clutches, and required more force to complete the task. These findings are consistent with the evolution of a surgeon over a training period, where the individual demonstrates a tendency to use smaller movements with less force in the optimal workspace (Sugiyama et al., 2018 and Uemura et al., 2014). Interestingly, the error count did not correlate significantly with any of the features.

3.2 Feature Density Plots

The density plots for the features are provided in the figures below. The plots illustrate the level of separation between Experts and Novices with regard to each feature. In each plot there is overlap between the two groups, which highlights the difficulty of making a prediction. There were differences in the mean values of the Gimbal Angle Sum, as well as in the spread of the data for Force RMS, Completion Time, and Travel Distance, with a wider distribution (larger dispersion along the range) for the Novice group. Additionally, the Number of Clutches and Error Count follow a similar pattern, with a semi-bimodal distribution for the Novice group.

featurePlot(
  x = lr_data[,-7],
  y = lr_data[, 7],
  plot = "density",
  scales = list(
    x = list(relation = "free"),
    y = list(relation = "free")
  ),
  par.settings = list(
    superpose.line = list(col = c("firebrick1", "lightgreen")),
    strip.background = list(col = "lightgoldenrod1")
  ),
  adjust = 1.5,
  pch = "|",
  col = c("firebrick1", "lightgreen"),
  main = "Density Plots of the Surgeon Skill Features",
  auto.key = list(columns = 3)
)

3.3 Building Logistic Regression Model

In this section, a binomial logistic regression model is developed after partitioning the data into training and testing sets (70% training and 30% testing). The model summary below reports the estimates, standard errors, \(z-scores\), and \(p-values\) for each coefficient. Although the factors appear not to be significant at the 0.05 threshold, the model can still be used, as this lack of significance is mostly due to the small available sample size and the individual differences within each skill group.

col_names <- c("completion.time", "error.count", "travel.distance", "number.clutches", "sos.force", "sum.gimbal.angle", "experience.level")
colnames(lr_data) <- col_names


## 70% of the sample size
smp_size <- floor(0.7 * nrow(lr_data))

## set the seed to make your partition reproducible
set.seed(123)
train_ind <- sample(seq_len(nrow(lr_data)), size = smp_size)
# train_ind <- c(1, 2, 3, 4, 6, 7, 8, 9 ,10 ,11, 12, 13, 14, 15, 16, 17, 18, 22, 24, 25, 30)

train <- lr_data[train_ind, ]
test <- lr_data[-train_ind, ]

glm.fit.1 <-
  glm(
    experience.level ~ completion.time +
      error.count +
      travel.distance +
      number.clutches +
      sos.force +
      sum.gimbal.angle,
    data = train,
    family = binomial
  )

glm.fit.2 <-
  glm(
    experience.level ~ completion.time +
      error.count +
      travel.distance +
      sos.force +
      sum.gimbal.angle,
    data = train,
    family = binomial
  )

summary(glm.fit.2)
## 
## Call:
## glm(formula = experience.level ~ completion.time + error.count + 
##     travel.distance + sos.force + sum.gimbal.angle, family = binomial, 
##     data = train)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.3811  -0.7295   0.2243   0.5650   1.2950  
## 
## Coefficients:
##                  Estimate Std. Error z value Pr(>|z|)  
## (Intercept)         5.333      4.594   1.161   0.2458  
## completion.time     3.460      3.838   0.902   0.3673  
## error.count         1.469      6.156   0.239   0.8114  
## travel.distance    52.838     30.675   1.722   0.0850 .
## sos.force         -42.033     23.168  -1.814   0.0696 .
## sum.gimbal.angle  -72.244     36.554  -1.976   0.0481 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 26.734  on 20  degrees of freedom
## Residual deviance: 16.897  on 15  degrees of freedom
## AIC: 28.897
## 
## Number of Fisher Scoring iterations: 6

3.3.1 Likelihood Ratio Test

Reducing the number of predictors can increase the predictive power of the logistic regression model. The significance of the reduced model was evaluated with the likelihood ratio test, which compares the likelihood of the data under the full model against the reduced model. Considering the amount of overlap in the feature density plots, and following several experiments, Number of Clutches was removed from the logistic regression model, yielding significant (or nearly significant) \(p-values\) for sum.gimbal.angle, sos.force, and travel.distance. A likelihood ratio test with \(p-value\) > 0.05 indicates that removing the predictor variable does not degrade the model fit, i.e., the difference between the two models is not statistically significant. Below is the result of the likelihood ratio test, showing a \(p-value\) > 0.05; it was therefore safe to remove number.clutches from the full model.

lr.test <- lrtest(glm.fit.1, glm.fit.2) # Likelihood ratio test
data.matrix(lr.test)
##   #Df    LogLik Df     Chisq Pr(>Chisq)
## 1   7 -8.280947 NA        NA         NA
## 2   6 -8.448554 -1 0.3352139   0.562605
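The test statistic can also be reproduced by hand from the two log-likelihoods: \(LR = -2(ln L_{reduced} - ln L_{full})\), compared against a chi-squared distribution with 1 degree of freedom. A minimal sketch using the values printed above:

```r
# Reproducing the likelihood ratio test from the reported log-likelihoods
ll_full    <- -8.280947   # glm.fit.1 (with number.clutches)
ll_reduced <- -8.448554   # glm.fit.2 (without number.clutches)
lr_stat <- -2 * (ll_reduced - ll_full)                  # ~0.335
p_value <- pchisq(lr_stat, df = 1, lower.tail = FALSE)  # ~0.563
```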

3.3.2 Pseudo \(R^{2}\) Determination

In order to assess how well the logistic regression model explains the variance in the dependent variable (analogous to ordinary least squares regression), a pseudo \(R^{2}\) statistic was calculated: specifically, McFadden’s \(R^{2}\), defined as \(1-\frac{ln(L_M)}{ln(L_0)}\), where \(ln(L_M)\) and \(ln(L_0)\) are the log-likelihoods of the fitted and null (intercept-only) models, respectively. McFadden’s \(R^{2}\) ranges between 0 and 1, and values close to 0 correspond to models with no predictive power. In our case, this value is 0.37, which is acceptable considering the nature of the problem and the available number of data samples.

pR2(glm.fit.2) # Pseudo R^2 - look for McFadden's R2 (0-1) - values close to 0 have no prediction power
##         llh     llhNull          G2    McFadden        r2ML        r2CU 
##  -8.4485539 -13.3667975   9.8364873   0.3679448   0.3739995   0.5194311
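Equivalently, McFadden’s \(R^{2}\) can be computed directly from the fitted and null log-likelihoods (the llh and llhNull values printed above):

```r
# McFadden's pseudo R^2 from the log-likelihoods reported by pR2()
llh      <- -8.4485539    # fitted model
llh_null <- -13.3667975   # intercept-only (null) model
mcfadden <- 1 - llh / llh_null   # ~0.368
```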

The regression model formula is as follows:

\[ Skill_{subj} = 5.33 + 3.46 \ CT + 1.47 \ EC + 52.84 \ TD - 42.03 \ SOSF - 72.24 \ GA_{sum}, \]

where \(Skill_{subj}\) is the weight indicator for the skill group, \(CT\) is the completion time, \(EC\) is the error count, \(TD\) is the travel distance (measured from Kuka’s end effector), \(SOSF\) is the square root of the sum of the squared forces, and \(GA_{sum}\) is the sum of changes in gimbal angles.

3.3.3 Skill Prediction using the Logistic Regression Model

The skill prediction for the testing group of surgeons (9 surgeons) based on their performance features is provided below. The prediction is made with the regression model previously trained on the training set of surgeons (21 surgeons). The results of 5-fold cross-validation indicate 78% accuracy, 67% sensitivity, and 83% specificity for skill prediction from the hand-controller performance features (two misclassifications among the 9 cases).

# 5-fold cross validation
ctrl <-
  trainControl(method = "repeatedcv",
               number = 5,
               savePredictions = TRUE)

mod_fit <- train(
  experience.level ~ completion.time +
    error.count +
    travel.distance +
    sos.force +
    sum.gimbal.angle,
  data = lr_data,
  method = "glm",
  family = "binomial",
  trControl = ctrl,
  tuneLength = 5
)

glm.pred = predict(mod_fit, newdata = test)
ptable <- confusionMatrix(data = glm.pred, test$experience.level)
ptable
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction Expert Novice
##     Expert      2      1
##     Novice      1      5
##                                           
##                Accuracy : 0.7778          
##                  95% CI : (0.3999, 0.9719)
##     No Information Rate : 0.6667          
##     P-Value [Acc > NIR] : 0.3772          
##                                           
##                   Kappa : 0.5             
##                                           
##  Mcnemar's Test P-Value : 1.0000          
##                                           
##             Sensitivity : 0.6667          
##             Specificity : 0.8333          
##          Pos Pred Value : 0.6667          
##          Neg Pred Value : 0.8333          
##              Prevalence : 0.3333          
##          Detection Rate : 0.2222          
##    Detection Prevalence : 0.3333          
##       Balanced Accuracy : 0.7500          
##                                           
##        'Positive' Class : Expert          
## 
fourfoldplot(ptable$table,
             main = "Confusion Matrix")
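The reported accuracy, sensitivity, and specificity follow directly from the confusion-matrix counts. As a sanity check, with Expert treated as the positive class:

```r
# Deriving the summary metrics from the confusion-matrix counts above,
# with "Expert" as the positive class.
tp <- 2; fn <- 1   # Experts predicted Expert / predicted Novice
fp <- 1; tn <- 5   # Novices predicted Expert / predicted Novice
accuracy    <- (tp + tn) / (tp + fn + fp + tn)  # 7/9 ~ 0.778
sensitivity <- tp / (tp + fn)                   # 2/3 ~ 0.667
specificity <- tn / (tn + fp)                   # 5/6 ~ 0.833
```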

# # AUC calculation
# glm.pred.auc = predict(glm.fit.2, newdata = test, type = "response")
# pred <- prediction(glm.pred.auc, test$experience.level)
# perf <- performance(pred, measure = "tpr", x.measure = "fpr")
# plot(perf)
# auc <- performance(pred, measure = "auc")
# auc <- auc@y.values[[1]]
# print(paste0("AUC = ", auc))

4 Conclusions

These results indicate that a desirable device should minimize Completion Time, Travel Distance, Number of Clutches, and Force Magnitude, while a maximum Gimbal Angle Sum leads to higher performance. In addition, the larger coefficients associated with \(TD\) and \(SOSF\) in the regression equation, i.e., -0.99 and -0.77, respectively, show the greater contribution of travel distance and force magnitude to the performance of a hand-controller. Finally, a surgeon skill prediction algorithm is provided that can distinguish Novice from Expert surgeons based on their performance in using the hand-controllers, with 78% accuracy, 67% sensitivity, and 83% specificity.


  1. Project neuroArm, Department of Clinical Neurosciences, University of Calgary.

  2. Project neuroArm, Department of Clinical Neurosciences, University of Calgary.

  3. Project neuroArm, Department of Clinical Neurosciences, University of Calgary.

  4. Project neuroArm, Department of Clinical Neurosciences, University of Calgary.

  5. Project neuroArm, Department of Clinical Neurosciences, University of Calgary. Corresponding author. garnette@ucalgary.ca.